Result: FAILURE
Tests: 1 failed / 9 succeeded
Started: 2022-05-18 05:52
Elapsed: 17m43s
Revision:
Builder: b99d561b-d66e-11ec-8bd0-da93db26d91c
infra-commit: 457184147
job-version: v1.25.0-alpha.0.553+eb88daeeae4a53
kubetest-version:
repo: k8s.io/kubernetes
repo-commit: eb88daeeae4a53a20579620e1791e30416517223
repos: {u'k8s.io/kubernetes': u'master'}
revision: v1.25.0-alpha.0.553+eb88daeeae4a53

Test Failures


kubetest Node Tests (15m58s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-008 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml: exit status 1
				from junit_runner.xml
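
To reproduce this step outside of Prow, the same runner invocation can be issued from a kubernetes/kubernetes checkout. A sketch, assuming an equivalent GCP project, SSH key, and image-config file are available locally (placeholders in angle brackets are not from this job):

  go run test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 \
    --ssh-env=gce --results-dir=/tmp/_artifacts \
    --project=<gcp-project> --zone=us-west1-b \
    --ssh-user=core --ssh-key=$HOME/.ssh/google_compute_engine \
    --ginkgo-flags='--nodes=1 --focus="\[Serial\]"' \
    --test_args='--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock' \
    --test-timeout=3h0m0s \
    --image-config-file=<test-infra>/jobs/e2e_node/swap/image-config-swap-fedora.yaml

The --skip regex and the kubelet flag bundle from the full command above can be appended unchanged.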




Error lines from build-log.txt

W0518 05:52:46.687] **************************************************************************
bootstrap.py is deprecated!
test-infra oncall does not support any job still using bootstrap.py.
Please migrate your job to podutils!
https://github.com/kubernetes/test-infra/blob/master/prow/pod-utilities.md
**************************************************************************
I0518 05:52:46.687] Args: --job=ci-kubernetes-node-swap-fedora-serial --service-account=/etc/service-account/service-account.json --upload=gs://kubernetes-jenkins/logs --repo=k8s.io/kubernetes=master --timeout=240 --root=/go/src --scenario=kubernetes_e2e -- --deployment=node --env=KUBE_SSH_USER=core --gcp-zone=us-west1-b --node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml '--node-test-args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"' --node-tests=true --provider=gce '--test_args=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"' --timeout=180m
I0518 05:52:46.687] Bootstrap ci-kubernetes-node-swap-fedora-serial...
I0518 05:52:46.690] Builder: b99d561b-d66e-11ec-8bd0-da93db26d91c
I0518 05:52:46.690] Image: gcr.io/k8s-staging-test-infra/kubekins-e2e:v20220514-17efd5d2c3-master
I0518 05:52:46.690] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088
I0518 05:52:46.690] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0518 05:52:47.405] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
... skipping 36 lines ...
echo $KUBE_GIT_VERSION
'
I0518 05:54:10.409] process 116 exited with code 0 after 0.0m
I0518 05:54:10.409] Start 1526802926893273088 at v1.25.0-alpha.0.553+eb88daeeae4a53...
I0518 05:54:10.411] Call:  gsutil -q -h Content-Type:application/json cp /tmp/gsutil_E_mBY4 gs://kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088/started.json
I0518 05:54:12.064] process 149 exited with code 0 after 0.0m
I0518 05:54:12.065] Call:  /workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py --deployment=node --env=KUBE_SSH_USER=core --gcp-zone=us-west1-b --node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml '--node-test-args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"' --node-tests=true --provider=gce '--test_args=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]"' --timeout=180m
W0518 05:54:12.107] starts with local mode
W0518 05:54:12.107] Environment:
W0518 05:54:12.107] ARTIFACTS=/workspace/_artifacts
W0518 05:54:12.107] AWS_SSH_PRIVATE_KEY_FILE=/root/.ssh/kube_aws_rsa
W0518 05:54:12.108] AWS_SSH_PUBLIC_KEY_FILE=/root/.ssh/kube_aws_rsa.pub
W0518 05:54:12.108] BAZEL_REMOTE_CACHE_ENABLED=false
... skipping 65 lines ...
W0518 05:54:12.117] SHLVL=1
W0518 05:54:12.117] SOURCE_DATE_EPOCH=1652847533
W0518 05:54:12.118] TERM=xterm
W0518 05:54:12.118] USER=prow
W0518 05:54:12.118] WORKSPACE=/workspace
W0518 05:54:12.118] _=./test-infra/jenkins/bootstrap.py
W0518 05:54:12.119] Run: ('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-zone=us-west1-b', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml', '--node-test-args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\\"name\\": \\"crio.log\\", \\"journalctl\\": [\\"-u\\", \\"crio\\"]}"', '--node-tests=true', '--test_args=--nodes=1 --focus="\\[Serial\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\]"', '--timeout=180m')
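
Rendered from the Python argv tuple above into a single shell command, the kubetest invocation is roughly (a reconstruction; the elided --node-test-args and --test_args values are the quoted flag bundles shown in the tuple):

  kubetest --dump=/workspace/_artifacts \
    --gcp-service-account=/etc/service-account/service-account.json \
    --up --down --test --deployment=node --provider=gce \
    --cluster=bootstrap-e2e --gcp-network=bootstrap-e2e --gcp-zone=us-west1-b \
    --node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml \
    --node-test-args='...' --node-tests=true --test_args='...' --timeout=180m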
W0518 05:54:12.298] 2022/05/18 05:54:12 Warning: Couldn't find directory src/sigs.k8s.io/cloud-provider-azure under any of GOPATH /go, defaulting to /go/src/k8s.io/cloud-provider-azure
W0518 05:54:12.298] 2022/05/18 05:54:12 main.go:284: Running kubetest version: 
W0518 05:54:12.298] 2022/05/18 05:54:12 main.go:344: Limiting testing to 3h0m0s
W0518 05:54:12.299] 2022/05/18 05:54:12 process.go:153: Running: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0518 05:54:13.014] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
W0518 05:54:13.124] 2022/05/18 05:54:13 process.go:155: Step 'gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json' finished in 825.354265ms
... skipping 10 lines ...
W0518 05:54:15.085] 2022/05/18 05:54:15 process.go:155: Step 'gcloud compute --project=k8s-infra-e2e-boskos-008 project-info describe' finished in 1.189793726s
W0518 05:54:15.085] 2022/05/18 05:54:15 node.go:62: Noop - Node KubectlCommand()
W0518 05:54:15.086] 2022/05/18 05:54:15 node.go:53: Noop - Node Down()
W0518 05:54:15.086] 2022/05/18 05:54:15 node.go:34: Noop - Node Up()
W0518 05:54:15.086] 2022/05/18 05:54:15 node.go:48: Noop - Node TestSetup()
W0518 05:54:15.086] 2022/05/18 05:54:15 e2e.go:590: cwd : /go/src/k8s.io/kubernetes
W0518 05:54:15.088] 2022/05/18 05:54:15 process.go:153: Running: go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-008 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml
W0518 05:55:41.913] I0518 05:55:41.906511    6832 run_remote.go:530] found images [{creationTime:{wall:44000000 ext:63787468578 loc:0xc000198af0} name:fedora-coreos-35-20220424-3-0-gcp-x86-64} {creationTime:{wall:618000000 ext:63786609965 loc:0xc000198a80} name:fedora-coreos-35-20220410-3-1-gcp-x86-64} {creationTime:{wall:55000000 ext:63785294393 loc:0xc000198a10} name:fedora-coreos-35-20220327-3-0-gcp-x86-64} {creationTime:{wall:819000000 ext:63784097449 loc:0xc0001988c0} name:fedora-coreos-35-20220313-3-1-gcp-x86-64} {creationTime:{wall:31000000 ext:63782885704 loc:0xc0001987e0} name:fedora-coreos-35-20220227-3-0-gcp-x86-64} {creationTime:{wall:488000000 ext:63781699912 loc:0xc000198770} name:fedora-coreos-35-20220213-3-0-gcp-x86-64} {creationTime:{wall:399000000 ext:63780496842 loc:0xc000198690} name:fedora-coreos-35-20220131-3-0-gcp-x86-64} {creationTime:{wall:762000000 ext:63779330449 loc:0xc000198540} name:fedora-coreos-35-20220116-3-0-gcp-x86-64} {creationTime:{wall:864000000 ext:63778036619 loc:0xc000198460} name:fedora-coreos-35-20220103-3-0-gcp-x86-64} {creationTime:{wall:662000000 ext:63776925628 loc:0xc0001983f0} name:fedora-coreos-35-20211215-3-0-gcp-x86-64} {creationTime:{wall:616000000 ext:63775375847 loc:0xc0001982a0} name:fedora-coreos-35-20211203-3-0-gcp-x86-64} {creationTime:{wall:260000000 ext:63774243297 loc:0xc0001981c0} name:fedora-coreos-35-20211119-3-0-gcp-x86-64} {creationTime:{wall:575000000 ext:63772793331 loc:0xc000198150} name:fedora-coreos-35-20211029-3-0-gcp-x86-64} {creationTime:{wall:99000000 ext:63772017933 loc:0xc0004e0c40} name:fedora-coreos-34-20211031-3-0-gcp-x86-64} {creationTime:{wall:325000000 ext:63771383064 loc:0xc0004e0bd0} name:fedora-coreos-34-20211016-3-0-gcp-x86-64} {creationTime:{wall:789000000 ext:63770272142 loc:0xc0004e0b60} name:fedora-coreos-34-20211004-3-1-gcp-x86-64} {creationTime:{wall:89000000 ext:63768968575 loc:0xc0004e0af0} name:fedora-coreos-34-20210919-3-0-gcp-x86-64} {creationTime:{wall:940000000 ext:63767755640 loc:0xc0004e0a80} name:fedora-coreos-34-20210904-3-0-gcp-x86-64} {creationTime:{wall:232000000 ext:63766608898 loc:0xc0004e0a10} name:fedora-coreos-34-20210821-3-0-gcp-x86-64} {creationTime:{wall:713000000 ext:63765344634 loc:0xc0004e09a0} name:fedora-coreos-34-20210808-3-0-gcp-x86-64} {creationTime:{wall:639000000 ext:63764146424 loc:0xc0004e0930} name:fedora-coreos-34-20210725-3-0-gcp-x86-64} {creationTime:{wall:154000000 ext:63763019724 loc:0xc0004e0850} name:fedora-coreos-34-20210711-3-0-gcp-x86-64} {creationTime:{wall:834000000 ext:63762502471 loc:0xc0004e07e0} name:fedora-coreos-34-20210626-3-2-gcp-x86-64} {creationTime:{wall:406000000 ext:63761874663 loc:0xc0004e0770} name:fedora-coreos-34-20210626-3-1-gcp-x86-64} {creationTime:{wall:143000000 ext:63761808433 loc:0xc0004e0700} name:fedora-coreos-34-20210626-3-0-gcp-x86-64} {creationTime:{wall:957000000 ext:63760496866 loc:0xc0004e0690} name:fedora-coreos-34-20210611-3-0-gcp-x86-64} {creationTime:{wall:867000000 ext:63759284388 loc:0xc0004e0620} name:fedora-coreos-34-20210529-3-0-gcp-x86-64} {creationTime:{wall:386000000 ext:63758178633 loc:0xc0004e05b0} name:fedora-coreos-34-20210518-3-0-gcp-x86-64} {creationTime:{wall:175000000 ext:63756928737 loc:0xc0004e0540} name:fedora-coreos-34-20210427-3-0-gcp-x86-64} {creationTime:{wall:689000000 ext:63755682445 loc:0xc0004e04d0} name:fedora-coreos-33-20210426-3-0-gcp-x86-64} {creationTime:{wall:179000000 ext:63755148404 loc:0xc0004e0460} name:fedora-coreos-33-20210412-3-0-gcp-x86-64} {creationTime:{wall:695000000 ext:63753842322 loc:0xc0004e03f0} name:fedora-coreos-33-20210328-3-0-gcp-x86-64} {creationTime:{wall:182000000 ext:63752629582 loc:0xc0004e0380} name:fedora-coreos-33-20210314-3-0-gcp-x86-64} {creationTime:{wall:603000000 ext:63751541641 loc:0xc0004e0310} name:fedora-coreos-33-20210301-3-1-gcp-x86-64} {creationTime:{wall:247000000 ext:63751525446 loc:0xc0004e02a0} name:fedora-coreos-33-20210301-3-0-gcp-x86-64} {creationTime:{wall:879000000 ext:63750277414 loc:0xc0004e0230} name:fedora-coreos-33-20210217-3-0-gcp-x86-64} {creationTime:{wall:898000000 ext:63749175049 loc:0xc0004e01c0} name:fedora-coreos-33-20210201-3-0-gcp-x86-64} {creationTime:{wall:459000000 ext:63747976139 loc:0xc0004e0150} name:fedora-coreos-33-20210117-3-2-gcp-x86-64} {creationTime:{wall:64000000 ext:63747902485 loc:0xc0004e00e0} name:fedora-coreos-33-20210117-3-1-gcp-x86-64} {creationTime:{wall:900000000 ext:63747880242 loc:0xc0004e0070} name:fedora-coreos-33-20210117-3-0-gcp-x86-64} {creationTime:{wall:640000000 ext:63747381482 loc:0xc0004e4c40} name:fedora-coreos-33-20210104-3-1-gcp-x86-64} {creationTime:{wall:850000000 ext:63746570094 loc:0xc0004e4bd0} name:fedora-coreos-33-20210104-3-0-gcp-x86-64} {creationTime:{wall:509000000 ext:63745405866 loc:0xc0004e4b60} name:fedora-coreos-33-20201214-3-1-gcp-x86-64} {creationTime:{wall:909000000 ext:63745375064 loc:0xc0004e4af0} name:fedora-coreos-33-20201214-3-0-gcp-x86-64} {creationTime:{wall:150000000 ext:63743763067 loc:0xc0004e4a80} name:fedora-coreos-33-20201201-3-0-gcp-x86-64} {creationTime:{wall:66000000 ext:63741211980 loc:0xc0004e4a10} name:fedora-coreos-32-20201104-3-0-gcp-x86-64} {creationTime:{wall:743000000 ext:63739953194 loc:0xc0004e49a0} name:fedora-coreos-32-20201018-3-0-gcp-x86-64} {creationTime:{wall:808000000 ext:63738728713 loc:0xc0004e4930} name:fedora-coreos-32-20201004-3-0-gcp-x86-64} {creationTime:{wall:289000000 ext:63737517804 loc:0xc0004e48c0} name:fedora-coreos-32-20200923-3-0-gcp-x86-64} {creationTime:{wall:703000000 ext:63736449686 loc:0xc0004e4850} name:fedora-coreos-32-20200907-3-0-gcp-x86-64} {creationTime:{wall:956000000 ext:63735156309 loc:0xc0004e4770} name:fedora-coreos-32-20200824-3-0-gcp-x86-64} {creationTime:{wall:852000000 ext:63733888322 loc:0xc0004e4700} name:fedora-coreos-32-20200809-3-0-gcp-x86-64} {creationTime:{wall:457000000 ext:63732810708 loc:0xc0004e4690} name:fedora-coreos-32-20200726-3-1-gcp-x86-64} {creationTime:{wall:780000000 ext:63732682701 loc:0xc0004e4620} name:fedora-coreos-32-20200726-3-0-gcp-x86-64} {creationTime:{wall:743000000 ext:63731450519 loc:0xc0004e45b0} name:fedora-coreos-32-20200715-3-0-gcp-x86-64} {creationTime:{wall:573000000 ext:63730004582 loc:0xc0004e4540} name:fedora-coreos-32-20200629-3-0-gcp-x86-64} {creationTime:{wall:38000000 ext:63729110330 loc:0xc0004e44d0} name:fedora-coreos-32-20200615-3-0-gcp-x86-64} {creationTime:{wall:236000000 ext:63727898326 loc:0xc0004e4460} name:fedora-coreos-32-20200601-3-0-gcp-x86-64} {creationTime:{wall:890000000 ext:63726632498 loc:0xc0004e43f0} name:fedora-coreos-31-20200517-3-0-gcp-x86-64} {creationTime:{wall:427000000 ext:63725496193 loc:0xc0004e4310} name:fedora-coreos-31-20200505-3-0-gcp-x86-64} {creationTime:{wall:903000000 ext:63724833368 loc:0xc0004e42a0} name:fedora-coreos-31-20200420-3-0-gcp-x86-64} {creationTime:{wall:938000000 ext:63723098515 loc:0xc0004e41c0} name:fedora-coreos-31-20200407-3-0-gcp-x86-64} {creationTime:{wall:763000000 ext:63722058314 loc:0xc0004e40e0} name:fedora-coreos-31-20200323-3-2-gcp-x86-64}] based on regex "" and family "fedora-coreos-stable" in project "fedora-coreos-cloud"
W0518 05:55:41.914] I0518 05:55:41.906651    6832 run_remote.go:438] parsing instance metadata: "user-data</workspace/test-infra/jobs/e2e_node/swap/crio_swap1g.ign"
W0518 05:55:41.914] I0518 05:55:41.906720    6832 run_remote.go:921] Injecting SSH public key into ignition
W0518 05:55:41.914] I0518 05:55:41.906751    6832 run_remote.go:440] parsed instance metadata: map[user-data:{
W0518 05:55:41.914]   "ignition": {
W0518 05:55:41.914]     "version": "3.3.0"
... skipping 36 lines ...
W0518 05:55:41.920]       {
W0518 05:55:41.921]         "contents": "[Unit]\nDescription=Enable swap on CoreOS\nBefore=crio-install.service\nConditionFirstBoot=no\n\n[Service]\nType=oneshot\nExecStart=/bin/sh -c \"sudo dd if=/dev/zero of=/var/swapfile count=1024 bs=1MiB && sudo chmod 600 /var/swapfile && sudo mkswap /var/swapfile && sudo swapon /var/swapfile && free -h\"\n[Install]\n\nWantedBy=multi-user.target\n",
W0518 05:55:41.921]         "enabled": true,
W0518 05:55:41.921]         "name": "swap-enable.service"
W0518 05:55:41.921]       },
W0518 05:55:41.922]       {
W0518 05:55:41.922]         "contents": "[Unit]\nDescription=Download and install crio binaries and configurations.\nAfter=network-online.target\n\n[Service]\nType=oneshot\nExecStartPre=/usr/bin/bash -c '/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -o /usr/local/crio-nodee2e-installer.sh  https://raw.githubusercontent.com/cri-o/cri-o/74583d26406963ba150004f343bc36c16a861164/scripts/node_e2e_installer'\nExecStart=/usr/bin/bash /usr/local/crio-nodee2e-installer.sh\n\n[Install]\nWantedBy=multi-user.target\n",
W0518 05:55:41.922]         "enabled": true,
W0518 05:55:41.923]         "name": "crio-install.service"
W0518 05:55:41.923]       }
W0518 05:55:41.923]     ]
W0518 05:55:41.923]   }
W0518 05:55:41.923] }
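
Unescaped from the JSON above, the two injected units run the following shell (reconstructed for readability): swap-enable.service provisions and activates a 1 GiB swap file before crio-install.service, which downloads and runs the CRI-O node-e2e installer.

  # swap-enable.service ExecStart
  sudo dd if=/dev/zero of=/var/swapfile count=1024 bs=1MiB && \
    sudo chmod 600 /var/swapfile && sudo mkswap /var/swapfile && \
    sudo swapon /var/swapfile && free -h

  # crio-install.service ExecStartPre, then ExecStart runs the fetched script
  /usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error \
    -o /usr/local/crio-nodee2e-installer.sh \
    https://raw.githubusercontent.com/cri-o/cri-o/74583d26406963ba150004f343bc36c16a861164/scripts/node_e2e_installer
  /usr/bin/bash /usr/local/crio-nodee2e-installer.sh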
... skipping 6 lines ...
W0518 05:55:42.125] I0518 05:55:42.049723    6832 run_remote.go:596] Creating instance {image:fedora-coreos-35-20220424-3-0-gcp-x86-64 imageDesc:fedora-coreos-35-20220424-3-0-gcp-x86-64 kernelArguments:[] project:fedora-coreos-cloud resources:{Accelerators:[]} metadata:0xc000198d20 machine:n1-standard-2 tests:[]} with service account "890593655482-compute@developer.gserviceaccount.com"
I0518 05:55:43.292] +++ [0518 05:55:43] Building go targets for linux/amd64
I0518 05:55:43.312]     k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
I0518 05:55:56.897] +++ [0518 05:55:56] Building go targets for linux/amd64
I0518 05:55:56.916]     k8s.io/code-generator/cmd/prerelease-lifecycle-gen (non-static)
I0518 05:56:02.762] +++ [0518 05:56:02] Generating prerelease lifecycle code for 26 targets
W0518 05:56:03.929] I0518 05:56:03.929143    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
I0518 05:56:05.213] +++ [0518 05:56:05] Building go targets for linux/amd64
I0518 05:56:05.232]     k8s.io/code-generator/cmd/deepcopy-gen (non-static)
I0518 05:56:07.325] +++ [0518 05:56:07] Generating deepcopy code for 236 targets
I0518 05:56:14.356] +++ [0518 05:56:14] Building go targets for linux/amd64
I0518 05:56:14.375]     k8s.io/code-generator/cmd/defaulter-gen (non-static)
I0518 05:56:15.657] +++ [0518 05:56:15] Generating defaulter code for 92 targets
I0518 05:56:25.946] +++ [0518 05:56:25] Building go targets for linux/amd64
I0518 05:56:25.973]     k8s.io/code-generator/cmd/conversion-gen (non-static)
I0518 05:56:27.629] +++ [0518 05:56:27] Generating conversion code for 129 targets
I0518 05:56:48.242] +++ [0518 05:56:48] Building go targets for linux/amd64
I0518 05:56:48.262]     k8s.io/kube-openapi/cmd/openapi-gen (non-static)
I0518 05:56:57.154] +++ [0518 05:56:57] Generating openapi code for KUBE
W0518 05:57:11.273] E0518 05:57:11.272983    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
I0518 05:57:21.148] +++ [0518 05:57:21] Generating openapi code for AGGREGATOR
I0518 05:57:22.710] +++ [0518 05:57:22] Generating openapi code for APIEXTENSIONS
I0518 05:57:24.537] +++ [0518 05:57:24] Generating openapi code for CODEGEN
I0518 05:57:26.087] +++ [0518 05:57:26] Generating openapi code for SAMPLEAPISERVER
I0518 05:57:27.618] make[1]: Leaving directory '/go/src/k8s.io/kubernetes'
I0518 05:57:28.016] +++ [0518 05:57:28] Building go targets for linux/amd64
I0518 05:57:28.035]     k8s.io/kubernetes/cmd/kubelet (non-static)
I0518 05:57:28.035]     k8s.io/kubernetes/test/e2e_node/e2e_node.test (test)
I0518 05:57:28.040]     github.com/onsi/ginkgo/ginkgo (non-static)
I0518 05:57:28.045]     k8s.io/kubernetes/cluster/gce/gci/mounter (non-static)
I0518 05:57:28.050]     k8s.io/kubernetes/test/e2e_node/plugins/gcp-credential-provider (non-static)
W0518 05:57:31.667] I0518 05:57:31.666306    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0518 05:57:33.195] E0518 05:57:33.195592    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0518 05:57:53.492] I0518 05:57:53.491281    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0518 05:57:54.892] E0518 05:57:54.891944    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0518 05:58:15.169] I0518 05:58:15.169060    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0518 05:58:16.569] E0518 05:58:16.569113    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0518 05:58:36.825] I0518 05:58:36.824942    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
W0518 05:58:38.124] E0518 05:58:38.124609    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0518 05:58:58.467] I0518 05:58:58.466449    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'systemctl list-units  --type=service  --state=running | grep -e containerd -e crio']
I0518 06:05:28.547] make: Leaving directory '/go/src/k8s.io/kubernetes'
W0518 06:05:42.154] I0518 06:05:42.154161    6832 remote.go:106] Staging test binaries on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:42.155] I0518 06:05:42.154337    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- mkdir /tmp/node-e2e-20220518T060542]
W0518 06:05:43.205] I0518 06:05:43.204800    6832 ssh.go:120] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine /go/src/k8s.io/kubernetes/e2e_node_test.tar.gz core@35.247.27.146:/tmp/node-e2e-20220518T060542/]
W0518 06:05:51.864] I0518 06:05:51.864165    6832 remote.go:133] Extracting tar on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:51.865] I0518 06:05:51.864235    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sh -c 'cd /tmp/node-e2e-20220518T060542 && tar -xzvf ./e2e_node_test.tar.gz']
W0518 06:05:54.827] I0518 06:05:54.827434    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- mkdir /tmp/node-e2e-20220518T060542/results]
W0518 06:05:55.444] I0518 06:05:55.443820    6832 remote.go:148] Running test on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:55.444] I0518 06:05:55.443864    6832 utils.go:66] Install CNI on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:55.445] I0518 06:05:55.443908    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20220518T060542/cni/bin ; curl -s -L https://storage.googleapis.com/k8s-artifacts-cni/release/v0.9.1/cni-plugins-linux-amd64-v0.9.1.tgz | tar -xz -C /tmp/node-e2e-20220518T060542/cni/bin']
W0518 06:05:57.126] I0518 06:05:57.126569    6832 utils.go:79] Adding CNI configuration on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:57.127] I0518 06:05:57.126659    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'mkdir -p /tmp/node-e2e-20220518T060542/cni/net.d ; echo '"'"'{
W0518 06:05:57.127]   "name": "mynet",
W0518 06:05:57.127]   "type": "bridge",
W0518 06:05:57.127]   "bridge": "mynet0",
W0518 06:05:57.127]   "isDefaultGateway": true,
W0518 06:05:57.128]   "forceAddress": false,
W0518 06:05:57.128]   "ipMasq": true,
... skipping 2 lines ...
W0518 06:05:57.128]     "type": "host-local",
W0518 06:05:57.128]     "subnet": "10.10.0.0/16"
W0518 06:05:57.129]   }
W0518 06:05:57.129] }
W0518 06:05:57.129] '"'"' > /tmp/node-e2e-20220518T060542/cni/net.d/mynet.conf']
W0518 06:05:57.847] I0518 06:05:57.846866    6832 utils.go:106] Configure iptables firewall rules on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:57.847] I0518 06:05:57.846931    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'iptables -I INPUT 1 -w -p tcp -j ACCEPT&&iptables -I INPUT 1 -w -p udp -j ACCEPT&&iptables -I INPUT 1 -w -p icmp -j ACCEPT&&iptables -I FORWARD 1 -w -p tcp -j ACCEPT&&iptables -I FORWARD 1 -w -p udp -j ACCEPT&&iptables -I FORWARD 1 -w -p icmp -j ACCEPT']
W0518 06:05:58.508] I0518 06:05:58.507711    6832 utils.go:92] Configuring kubelet credential provider on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:58.508] I0518 06:05:58.507788    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'echo '"'"'kind: CredentialProviderConfig
W0518 06:05:58.508] apiVersion: kubelet.config.k8s.io/v1beta1
W0518 06:05:58.509] providers:
W0518 06:05:58.509]   - name: gcp-credential-provider
W0518 06:05:58.509]     apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
W0518 06:05:58.509]     matchImages:
W0518 06:05:58.509]     - "gcr.io"
W0518 06:05:58.509]     - "*.gcr.io"
W0518 06:05:58.509]     - "container.cloud.google.com"
W0518 06:05:58.510]     - "*.pkg.dev"
W0518 06:05:58.510]     defaultCacheDuration: 1m'"'"' > /tmp/node-e2e-20220518T060542/credential-provider.yaml']
W0518 06:05:59.145] I0518 06:05:59.145022    6832 utils.go:127] Killing any existing node processes on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:05:59.145] I0518 06:05:59.145081    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'pkill kubelet ; pkill kube-apiserver ; pkill etcd ; pkill e2e_node.test']
W0518 06:05:59.807] E0518 06:05:59.807202    6832 ssh.go:123] failed to run SSH command: out: , err: exit status 1
W0518 06:05:59.808] I0518 06:05:59.807287    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo cat /etc/os-release]
W0518 06:06:00.438] I0518 06:06:00.437776    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c '/usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220518T060542/kubelet && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220518T060542/e2e_node.test && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220518T060542/ginkgo && /usr/bin/chcon -u system_u -r object_r -t bin_t /tmp/node-e2e-20220518T060542/mounter && /usr/bin/chcon -R -u system_u -r object_r -t bin_t /tmp/node-e2e-20220518T060542/cni/bin']
W0518 06:06:01.080] I0518 06:06:01.080703    6832 node_e2e.go:200] Starting tests on "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:06:01.082] I0518 06:06:01.080781    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'cd /tmp/node-e2e-20220518T060542 && timeout -k 30s 10800.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --report-dir=/tmp/node-e2e-20220518T060542/results --report-prefix=fedora --image-description="fedora-coreos-35-20220424-3-0-gcp-x86-64" --feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"']
W0518 06:10:08.647] E0518 06:10:08.646433    6832 ssh.go:123] failed to run SSH command: out: Flag --logtostderr has been deprecated, will be removed in a future release, see https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/2845-deprecate-klog-specific-flags-in-k8s-components
W0518 06:10:08.647] W0518 06:06:01.793732    2450 test_context.go:458] Unable to find in-cluster config, using default host : https://127.0.0.1:6443
W0518 06:10:08.647] I0518 06:06:01.793965    2450 test_context.go:475] Tolerating taints "node-role.kubernetes.io/control-plane,node-role.kubernetes.io/master" when considering if nodes are ready
W0518 06:10:08.647] May 18 06:06:01.794: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
W0518 06:10:08.648] I0518 06:06:01.794376    2450 feature_gate.go:245] feature gates: &{map[NodeSwap:true]}
W0518 06:10:08.648] I0518 06:06:01.877303    2450 mount_linux.go:222] Detected OS with systemd
W0518 06:10:08.648] I0518 06:06:01.891151    2450 mount_linux.go:222] Detected OS with systemd
... skipping 56 lines ...
W0518 06:10:08.659] I0518 06:06:02.075634    2450 remote_runtime.go:118] "Using CRI v1 runtime API"
W0518 06:10:08.659] I0518 06:06:02.075790    2450 remote_image.go:45] "Connecting to image service" endpoint="unix:///var/run/crio/crio.sock"
W0518 06:10:08.659] I0518 06:06:02.075995    2450 remote_image.go:87] "Finding the CRI API image version"
W0518 06:10:08.659] I0518 06:06:02.077983    2450 remote_image.go:91] "Using CRI v1 image API"
W0518 06:10:08.660] I0518 06:06:02.078086    2450 image_list.go:157] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/cadvisor/cadvisor:v0.43.0 k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.36 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.2 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.7 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
W0518 06:10:08.661] I0518 06:08:12.045983    2450 e2e_node_suite_test.go:280] Locksmithd is masked successfully
W0518 06:10:08.662] I0518 06:08:12.046118    2450 server.go:102] Starting server "services" with command "/tmp/node-e2e-20220518T060542/e2e_node.test --run-services-mode --bearer-token=0sZDJjbLZbX29DiP --test.timeout=24h0m0s --ginkgo.seed=1652853961 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Slow\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --report-dir=/tmp/node-e2e-20220518T060542/results --report-prefix=fedora --image-description=fedora-coreos-35-20220424-3-0-gcp-x86-64 --feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
W0518 06:10:08.662] I0518 06:08:12.046149    2450 util.go:48] Running readiness check for service "services"
W0518 06:10:08.662] I0518 06:08:12.046739    2450 server.go:130] Output file for server "services": /tmp/node-e2e-20220518T060542/results/services.log
W0518 06:10:08.663] I0518 06:08:12.047290    2450 server.go:160] Waiting for server "services" start command to complete
W0518 06:10:08.663] W0518 06:08:13.046662    2450 util.go:104] Health check on "https://127.0.0.1:6443/healthz" failed, error=Head "https://127.0.0.1:6443/healthz": dial tcp 127.0.0.1:6443: connect: connection refused
W0518 06:10:08.663] W0518 06:08:15.856021    2450 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
W0518 06:10:08.663] I0518 06:08:16.857782    2450 services.go:68] Node services started.
W0518 06:10:08.663] I0518 06:08:16.857797    2450 kubelet.go:154] Starting kubelet
W0518 06:10:08.664] I0518 06:08:16.866789    2450 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=append:/tmp/node-e2e-20220518T060542/results/kubelet.log --unit=kubelet-20220518T060542.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20220518T060542/kubelet --kubeconfig /tmp/node-e2e-20220518T060542/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates NodeSwap=true --hostname-override n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220518T060542/kubelet-config --fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service"
W0518 06:10:08.665] I0518 06:08:16.866938    2450 util.go:48] Running readiness check for service "kubelet"
W0518 06:10:08.665] I0518 06:08:16.867188    2450 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20220518T060542/results/kubelet.log
W0518 06:10:08.665] I0518 06:08:16.867687    2450 server.go:160] Waiting for server "kubelet" start command to complete
W0518 06:10:08.665] I0518 06:08:17.898119    2450 services.go:78] Kubelet started.
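
Because the kubelet is launched as a transient systemd unit (kubelet-20220518T060542.service, via the systemd-run command above), its state and output on the test host could be inspected with, e.g. (a sketch):

  systemctl status kubelet-20220518T060542.service
  journalctl -u kubelet-20220518T060542.service
  tail -f /tmp/node-e2e-20220518T060542/results/kubelet.log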
W0518 06:10:08.665] I0518 06:08:17.898545    2450 e2e_node_suite_test.go:226] Wait for the node to be ready
W0518 06:10:08.666] May 18 06:08:28.944: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
... skipping 93 lines ...
W0518 06:10:08.681] STEP: Configuring hugepages
W0518 06:10:08.681] May 18 06:08:29.109: INFO: Hugepages total is set to 8
W0518 06:10:08.681] [BeforeEach] with static policy
W0518 06:10:08.681]   test/e2e_node/util.go:165
W0518 06:10:08.681] STEP: Stopping the kubelet
W0518 06:10:08.682] May 18 06:08:29.174: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
W0518 06:10:08.682]   kubelet-20220518T060542.service loaded active running /tmp/node-e2e-20220518T060542/kubelet --kubeconfig /tmp/node-e2e-20220518T060542/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates NodeSwap=true --hostname-override n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220518T060542/kubelet-config --fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
W0518 06:10:08.682] 
W0518 06:10:08.683] LOAD   = Reflects whether the unit definition was properly loaded.
W0518 06:10:08.683] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
W0518 06:10:08.683] SUB    = The low-level unit activation state, values depend on unit type.
W0518 06:10:08.683] 1 loaded units listed.
W0518 06:10:08.683] , kubelet-20220518T060542
W0518 06:10:08.683] W0518 06:08:29.331350    2450 util.go:388] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58500->127.0.0.1:10248: read: connection reset by peer
W0518 06:10:08.683] STEP: Starting the kubelet
W0518 06:10:08.684] W0518 06:08:29.428045    2450 util.go:388] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
W0518 06:10:08.684] May 18 06:08:34.431: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0518 06:10:08.684] May 18 06:08:35.436: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0518 06:10:08.685] May 18 06:08:36.440: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0518 06:10:08.685] May 18 06:08:37.443: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0518 06:10:08.685] May 18 06:08:38.447: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
W0518 06:10:08.686] May 18 06:08:39.450: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 60 lines ...
W0518 06:10:08.696] May 18 06:09:18.492: INFO: Pod memory-manager-static2gpn6 still exists
W0518 06:10:08.696] May 18 06:09:20.489: INFO: Waiting for pod memory-manager-static2gpn6 to disappear
W0518 06:10:08.696] May 18 06:09:20.493: INFO: Pod memory-manager-static2gpn6 still exists
W0518 06:10:08.697] May 18 06:09:22.488: INFO: Waiting for pod memory-manager-static2gpn6 to disappear
W0518 06:10:08.697] May 18 06:09:22.492: INFO: Pod memory-manager-static2gpn6 still exists
W0518 06:10:08.697] , err: exit status 255
W0518 06:10:08.697] I0518 06:10:08.646605    6832 remote.go:233] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0518 06:10:08.698] I0518 06:10:08.646650    6832 ssh.go:120] Running the command ssh, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'journalctl --system --all > /tmp/20220518T061008-system.log']
W0518 06:10:11.206] I0518 06:10:11.206302    6832 remote.go:238] Got the system logs from journald; copying it back...
W0518 06:10:11.207] I0518 06:10:11.206379    6832 ssh.go:120] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146:/tmp/20220518T061008-system.log /workspace/_artifacts/n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f-system.log]
W0518 06:10:12.219] I0518 06:10:12.218813    6832 remote.go:158] Copying test artifacts from "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
W0518 06:10:12.219] I0518 06:10:12.219032    6832 ssh.go:120] Running the command scp, with args: [-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r core@35.247.27.146:/tmp/node-e2e-20220518T060542/results/*.log /workspace/_artifacts/n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f]
W0518 06:10:12.901] E0518 06:10:12.901225    6832 ssh.go:123] failed to run SSH command: out: scp: /tmp/node-e2e-20220518T060542/results/*.log: No such file or directory
W0518 06:10:12.901] , err: exit status 1
W0518 06:10:13.115] I0518 06:10:13.115403    6832 run_remote.go:872] Deleting instance "n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f"
I0518 06:10:13.638] 
I0518 06:10:13.639] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0518 06:10:13.639] >                              START TEST                                >
I0518 06:10:13.639] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
... skipping 64 lines ...
I0518 06:10:13.650] I0518 06:06:02.075634    2450 remote_runtime.go:118] "Using CRI v1 runtime API"
I0518 06:10:13.650] I0518 06:06:02.075790    2450 remote_image.go:45] "Connecting to image service" endpoint="unix:///var/run/crio/crio.sock"
I0518 06:10:13.650] I0518 06:06:02.075995    2450 remote_image.go:87] "Finding the CRI API image version"
I0518 06:10:13.651] I0518 06:06:02.077983    2450 remote_image.go:91] "Using CRI v1 image API"
I0518 06:10:13.652] I0518 06:06:02.078086    2450 image_list.go:157] Pre-pulling images with CRI [docker.io/nfvpe/sriov-device-plugin:v3.1 gcr.io/cadvisor/cadvisor:v0.43.0 k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/e2e-test-images/agnhost:2.36 k8s.gcr.io/e2e-test-images/busybox:1.29-2 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 k8s.gcr.io/e2e-test-images/ipc-utils:1.3 k8s.gcr.io/e2e-test-images/nginx:1.14-2 k8s.gcr.io/e2e-test-images/node-perf/npb-ep:1.2 k8s.gcr.io/e2e-test-images/node-perf/npb-is:1.2 k8s.gcr.io/e2e-test-images/node-perf/tf-wide-deep:1.2 k8s.gcr.io/e2e-test-images/nonewprivs:1.3 k8s.gcr.io/e2e-test-images/nonroot:1.2 k8s.gcr.io/e2e-test-images/perl:5.26 k8s.gcr.io/e2e-test-images/sample-device-plugin:1.3 k8s.gcr.io/e2e-test-images/volume/gluster:1.3 k8s.gcr.io/e2e-test-images/volume/nfs:1.3 k8s.gcr.io/etcd:3.5.3-0 k8s.gcr.io/node-problem-detector/node-problem-detector:v0.8.7 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.7 k8s.gcr.io/stress:v1 quay.io/kubevirt/device-plugin-kvm]
I0518 06:10:13.652] I0518 06:08:12.045983    2450 e2e_node_suite_test.go:280] Locksmithd is masked successfully
I0518 06:10:13.653] I0518 06:08:12.046118    2450 server.go:102] Starting server "services" with command "/tmp/node-e2e-20220518T060542/e2e_node.test --run-services-mode --bearer-token=0sZDJjbLZbX29DiP --test.timeout=24h0m0s --ginkgo.seed=1652853961 --ginkgo.focus=\\[Serial\\] --ginkgo.skip=\\[Flaky\\]|\\[Slow\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\] --ginkgo.slowSpecThreshold=5.00000 --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --report-dir=/tmp/node-e2e-20220518T060542/results --report-prefix=fedora --image-description=fedora-coreos-35-20220424-3-0-gcp-x86-64 --feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags=--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service --extra-log={\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"
I0518 06:10:13.653] I0518 06:08:12.046149    2450 util.go:48] Running readiness check for service "services"
I0518 06:10:13.653] I0518 06:08:12.046739    2450 server.go:130] Output file for server "services": /tmp/node-e2e-20220518T060542/results/services.log
I0518 06:10:13.654] I0518 06:08:12.047290    2450 server.go:160] Waiting for server "services" start command to complete
I0518 06:10:13.654] W0518 06:08:13.046662    2450 util.go:104] Health check on "https://127.0.0.1:6443/healthz" failed, error=Head "https://127.0.0.1:6443/healthz": dial tcp 127.0.0.1:6443: connect: connection refused
I0518 06:10:13.654] W0518 06:08:15.856021    2450 util.go:106] Health check on "https://127.0.0.1:6443/healthz" failed, status=500
I0518 06:10:13.654] I0518 06:08:16.857782    2450 services.go:68] Node services started.
I0518 06:10:13.654] I0518 06:08:16.857797    2450 kubelet.go:154] Starting kubelet
I0518 06:10:13.655] I0518 06:08:16.866789    2450 server.go:102] Starting server "kubelet" with command "/usr/bin/systemd-run -p Delegate=true -p StandardError=append:/tmp/node-e2e-20220518T060542/results/kubelet.log --unit=kubelet-20220518T060542.service --slice=runtime.slice --remain-after-exit /tmp/node-e2e-20220518T060542/kubelet --kubeconfig /tmp/node-e2e-20220518T060542/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates NodeSwap=true --hostname-override n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220518T060542/kubelet-config --fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service"
I0518 06:10:13.655] I0518 06:08:16.866938    2450 util.go:48] Running readiness check for service "kubelet"
I0518 06:10:13.656] I0518 06:08:16.867188    2450 server.go:130] Output file for server "kubelet": /tmp/node-e2e-20220518T060542/results/kubelet.log
I0518 06:10:13.656] I0518 06:08:16.867687    2450 server.go:160] Waiting for server "kubelet" start command to complete
I0518 06:10:13.656] I0518 06:08:17.898119    2450 services.go:78] Kubelet started.
I0518 06:10:13.656] I0518 06:08:17.898545    2450 e2e_node_suite_test.go:226] Wait for the node to be ready
I0518 06:10:13.657] May 18 06:08:28.944: INFO: Parsing ds from https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/device-plugins/nvidia-gpu/daemonset.yaml
... skipping 93 lines ...
I0518 06:10:13.671] STEP: Configuring hugepages
I0518 06:10:13.671] May 18 06:08:29.109: INFO: Hugepages total is set to 8
I0518 06:10:13.671] [BeforeEach] with static policy
I0518 06:10:13.672]   test/e2e_node/util.go:165
I0518 06:10:13.672] STEP: Stopping the kubelet
I0518 06:10:13.672] May 18 06:08:29.174: INFO: Get running kubelet with systemctl:   UNIT                            LOAD   ACTIVE SUB     DESCRIPTION
I0518 06:10:13.673]   kubelet-20220518T060542.service loaded active running /tmp/node-e2e-20220518T060542/kubelet --kubeconfig /tmp/node-e2e-20220518T060542/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --feature-gates NodeSwap=true --hostname-override n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --container-runtime-endpoint unix:///var/run/crio/crio.sock --config /tmp/node-e2e-20220518T060542/kubelet-config --fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service
I0518 06:10:13.673] 
I0518 06:10:13.673] LOAD   = Reflects whether the unit definition was properly loaded.
I0518 06:10:13.673] ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
I0518 06:10:13.673] SUB    = The low-level unit activation state, values depend on unit type.
I0518 06:10:13.673] 1 loaded units listed.
I0518 06:10:13.673] , kubelet-20220518T060542
I0518 06:10:13.674] W0518 06:08:29.331350    2450 util.go:388] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": read tcp 127.0.0.1:58500->127.0.0.1:10248: read: connection reset by peer
I0518 06:10:13.674] STEP: Starting the kubelet
I0518 06:10:13.674] W0518 06:08:29.428045    2450 util.go:388] Health check on "http://127.0.0.1:10248/healthz" failed, error=Head "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
I0518 06:10:13.675] May 18 06:08:34.431: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0518 06:10:13.675] May 18 06:08:35.436: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0518 06:10:13.675] May 18 06:08:36.440: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0518 06:10:13.676] May 18 06:08:37.443: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0518 06:10:13.676] May 18 06:08:38.447: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0518 06:10:13.676] May 18 06:08:39.450: INFO: Condition Ready of node n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f is false instead of true. Reason: KubeletNotReady, message: [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
... skipping 61 lines ...
I0518 06:10:13.690] May 18 06:09:20.489: INFO: Waiting for pod memory-manager-static2gpn6 to disappear
I0518 06:10:13.690] May 18 06:09:20.493: INFO: Pod memory-manager-static2gpn6 still exists
I0518 06:10:13.690] May 18 06:09:22.488: INFO: Waiting for pod memory-manager-static2gpn6 to disappear
I0518 06:10:13.690] May 18 06:09:22.492: INFO: Pod memory-manager-static2gpn6 still exists
I0518 06:10:13.690] 
I0518 06:10:13.691] Failure Finished Test Suite on Host n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f
I0518 06:10:13.692] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine core@35.247.27.146 -- sudo sh -c 'cd /tmp/node-e2e-20220518T060542 && timeout -k 30s 10800.000000s ./ginkgo --nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --runtime-config= --logtostderr --v 4 --node-name=n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f --report-dir=/tmp/node-e2e-20220518T060542/results --report-prefix=fedora --image-description="fedora-coreos-35-20220424-3-0-gcp-x86-64" --feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}"'] failed with error: exit status 255, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r core@35.247.27.146:/tmp/node-e2e-20220518T060542/results/*.log /workspace/_artifacts/n1-standard-2-fedora-coreos-35-20220424-3-0-gcp-x86-64-3aa3f85f] failed with error: exit status 1]
I0518 06:10:13.692] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0518 06:10:13.693] <                              FINISH TEST                               <
I0518 06:10:13.693] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0518 06:10:13.693] 
I0518 06:10:13.693] Failure: 1 errors encountered.
W0518 06:10:13.794] exit status 1
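The failure block above captures the two remote steps run_remote.go performs: execute the ginkgo suite on the VM over ssh, then scp the result logs back to /workspace/_artifacts. A stripped-down sketch of that pattern, with the host-key options and identity file copied from the logged commands (the function names are illustrative):

    import subprocess

    SSH_OPTS = [
        "-o", "UserKnownHostsFile=/dev/null",
        "-o", "IdentitiesOnly=yes",
        "-o", "StrictHostKeyChecking=no",
        "-o", "ServerAliveInterval=30",
        "-o", "LogLevel=ERROR",
        "-i", "/workspace/.ssh/google_compute_engine",  # key path from the log
    ]

    def run_remote(host: str, remote_cmd: str) -> int:
        """Run a command on the test VM as logged: ssh ... -- sudo sh -c '<cmd>'."""
        return subprocess.run(
            ["ssh", *SSH_OPTS, f"core@{host}", "--", "sudo", "sh", "-c", remote_cmd]
        ).returncode

    def fetch_results(host: str, remote_path: str, local_dir: str) -> int:
        """Copy result logs back, mirroring the scp step that failed above."""
        return subprocess.run(
            ["scp", *SSH_OPTS, "-r", f"core@{host}:{remote_path}", local_dir]
        ).returncode

Note that ssh exits 255 both when the transport fails and when the remote command itself exits 255, so the status above does not by itself say which happened; the scp step failing with exit status 1 immediately afterwards may simply mean the expected *.log files were never written.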
W0518 06:10:13.813] 2022/05/18 06:10:13 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-008 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml' finished in 15m58.726637525s
W0518 06:10:13.813] 2022/05/18 06:10:13 e2e.go:574: Dumping logs locally to: /workspace/_artifacts
W0518 06:10:13.813] 2022/05/18 06:10:13 process.go:153: Running: ./cluster/log-dump/log-dump.sh /workspace/_artifacts
W0518 06:10:13.888] Trying to find master named 'bootstrap-e2e-master'
W0518 06:10:13.888] Looking for address 'bootstrap-e2e-master-ip'
I0518 06:10:13.989] Checking for custom logdump instances, if any
I0518 06:10:13.989] ----------------------------------------------------------------------------------------------------
... skipping 4 lines ...
I0518 06:10:13.990] Sourcing kube-util.sh
I0518 06:10:13.990] Detecting project
I0518 06:10:13.990] Project: k8s-infra-e2e-boskos-008
I0518 06:10:13.991] Network Project: k8s-infra-e2e-boskos-008
I0518 06:10:13.991] Zone: us-west1-b
I0518 06:10:13.991] Dumping logs from master locally to '/workspace/_artifacts'
W0518 06:10:14.798] ERROR: (gcloud.compute.addresses.describe) Could not fetch resource:
W0518 06:10:14.798]  - The resource 'projects/k8s-infra-e2e-boskos-008/regions/us-west1/addresses/bootstrap-e2e-master-ip' was not found
W0518 06:10:14.798] 
W0518 06:10:14.993] Could not detect Kubernetes master node.  Make sure you've launched a cluster with 'kube-up.sh'
I0518 06:10:15.094] Master not detected. Is the cluster up?
I0518 06:10:15.094] Dumping logs from nodes locally to '/workspace/_artifacts'
I0518 06:10:15.094] Detecting nodes in the cluster
... skipping 4 lines ...
W0518 06:10:19.766] NODE_NAMES=
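Everything from "Trying to find master" through the empty NODE_NAMES above is expected noise for a node-only job: log-dump.sh probes for a cluster master and nodes created by kube-up.sh, and kubetest's node deployment never creates them, so the gcloud describe fails and the node list comes back empty. A sketch of that probe, treating any nonzero gcloud exit as "no master" (that interpretation is the sketch's assumption; project, region, and address name are taken from the log):

    import subprocess
    from typing import Optional

    def detect_master_ip(project: str = "k8s-infra-e2e-boskos-008",
                         region: str = "us-west1",
                         name: str = "bootstrap-e2e-master-ip") -> Optional[str]:
        """Probe for the reserved master address; None means no cluster master exists."""
        result = subprocess.run(
            ["gcloud", "compute", "addresses", "describe", name,
             "--project", project, "--region", region, "--format=value(address)"],
            capture_output=True, text=True,
        )
        if result.returncode != 0:
            return None  # address not found, as in the ERROR above
        return result.stdout.strip()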
W0518 06:10:19.768] 2022/05/18 06:10:19 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 5.955944376s
W0518 06:10:19.768] 2022/05/18 06:10:19 node.go:53: Noop - Node Down()
W0518 06:10:19.768] 2022/05/18 06:10:19 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
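junit_runner.xml is how this step failure surfaces as the "kubetest Node Tests" entry at the top of this page: each kubetest step is recorded as one testcase, and failing steps carry the error text shown above. A rough sketch of emitting such a file (the element and attribute names follow the common junit schema; this is not kubetest's actual writer):

    import xml.etree.ElementTree as ET

    def write_junit(path: str, name: str, seconds: float, failure: str = None) -> None:
        """Write a one-testcase junit file; a non-None failure marks the case failed."""
        suite = ET.Element("testsuite", tests="1", failures=str(1 if failure else 0))
        case = ET.SubElement(suite, "testcase", name=name, time=f"{seconds:.3f}")
        if failure:
            ET.SubElement(case, "failure").text = failure  # e.g. the run_remote error above
        ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)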
W0518 06:10:19.769] 2022/05/18 06:10:19 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0518 06:10:20.120] 2022/05/18 06:10:20 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 351.670035ms
W0518 06:10:20.144] 2022/05/18 06:10:20 main.go:331: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-infra-e2e-boskos-008 --zone=us-west1-b --ssh-user=core --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=1 --focus="\[Serial\]" --skip="\[Flaky\]|\[Slow\]|\[Benchmark\]|\[NodeSpecialFeature:.+\]|\[NodeSpecialFeature\]|\[NodeAlphaFeature:.+\]|\[NodeAlphaFeature\]|\[NodeFeature:Eviction\]" --test_args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\"name\": \"crio.log\", \"journalctl\": [\"-u\", \"crio\"]}" --test-timeout=3h0m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml: exit status 1]
W0518 06:10:20.144] Traceback (most recent call last):
W0518 06:10:20.145]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module>
W0518 06:10:20.146]     main(parse_args())
W0518 06:10:20.146]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main
W0518 06:10:20.146]     mode.start(runner_args)
W0518 06:10:20.146]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start
W0518 06:10:20.146]     check_env(env, self.command, *args)
W0518 06:10:20.147]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0518 06:10:20.147]     subprocess.check_call(cmd, env=env)
W0518 06:10:20.147]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0518 06:10:20.147]     raise CalledProcessError(retcode, cmd)
W0518 06:10:20.148] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--gcp-zone=us-west1-b', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/swap/image-config-swap-fedora.yaml', '--node-test-args=--feature-gates=NodeSwap=true --container-runtime-endpoint=unix:///var/run/crio/crio.sock --container-runtime-process-name=/usr/local/bin/crio --container-runtime-pid-file= --kubelet-flags="--fail-swap-on=false --cgroup-driver=systemd --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/crio.service --kubelet-cgroups=/system.slice/kubelet.service" --extra-log="{\\"name\\": \\"crio.log\\", \\"journalctl\\": [\\"-u\\", \\"crio\\"]}"', '--node-tests=true', '--test_args=--nodes=1 --focus="\\[Serial\\]" --skip="\\[Flaky\\]|\\[Slow\\]|\\[Benchmark\\]|\\[NodeSpecialFeature:.+\\]|\\[NodeSpecialFeature\\]|\\[NodeAlphaFeature:.+\\]|\\[NodeAlphaFeature\\]|\\[NodeFeature:Eviction\\]"', '--timeout=180m')' returned non-zero exit status 1
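The traceback shows the failure path through the scenario runner: check_env wraps subprocess.check_call, which raises CalledProcessError whenever the kubetest child exits nonzero, and bootstrap.py converts the uncaught exception into the FAIL below. A condensed version of that wrapper, matching the frames in the traceback (the env-merging detail is simplified for the sketch):

    import os
    import subprocess

    def check_env(env: dict, *cmd: str) -> None:
        """Run cmd with extra environment variables; raise on nonzero exit."""
        merged = dict(os.environ)
        merged.update(env)
        subprocess.check_call(cmd, env=merged)  # raises CalledProcessError, as in the traceback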
E0518 06:10:20.154] Command failed
I0518 06:10:20.154] process 321 exited with code 1 after 16.1m
E0518 06:10:20.154] FAIL: ci-kubernetes-node-swap-fedora-serial
I0518 06:10:20.154] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0518 06:10:20.927] Activated service account credentials for: [prow-build@k8s-infra-prow-build.iam.gserviceaccount.com]
I0518 06:10:21.073] process 56810 exited with code 0 after 0.0m
I0518 06:10:21.074] Call:  gcloud config get-value account
I0518 06:10:21.743] process 56824 exited with code 0 after 0.0m
I0518 06:10:21.744] Will upload results to gs://kubernetes-jenkins/logs using prow-build@k8s-infra-prow-build.iam.gserviceaccount.com
I0518 06:10:21.744] Upload result and artifacts...
I0518 06:10:21.744] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088
I0518 06:10:21.745] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088/artifacts
W0518 06:10:22.907] CommandException: One or more URLs matched no objects.
E0518 06:10:23.129] Command failed
I0518 06:10:23.129] process 56838 exited with code 1 after 0.0m
W0518 06:10:23.129] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088/artifacts does not exist yet
I0518 06:10:23.130] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-node-swap-fedora-serial/1526802926893273088/artifacts
I0518 06:10:25.058] process 56978 exited with code 0 after 0.0m
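Artifacts are uploaded even though the run failed: bootstrap first probes the destination with gsutil ls (the CommandException above only means this build's directory does not exist yet) and then copies the artifacts tree, gzip-compressing text-like files in transit. A sketch of that probe-then-upload flow, with the bucket path handling and -z extensions mirroring the logged commands (the helper name is made up):

    import subprocess

    def upload_artifacts(local_dir: str, gcs_dir: str) -> None:
        """Probe the destination, then upload the artifacts tree with text compression."""
        probe = subprocess.run(["gsutil", "ls", gcs_dir], capture_output=True)
        if probe.returncode != 0:
            # First upload for this build: the directory simply is not there yet.
            print(f"Remote dir {gcs_dir} does not exist yet")
        subprocess.check_call([
            "gsutil", "-m", "-q", "cp", "-r",
            "-z", "log,txt,xml",  # gzip-encode these extensions on upload
            local_dir, gcs_dir,
        ])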
I0518 06:10:25.059] Call:  git rev-parse HEAD
I0518 06:10:25.062] process 57494 exited with code 0 after 0.0m
... skipping 13 lines ...