Result: FAILURE
Tests: 1 failed / 7 succeeded
Started: 2020-02-15 18:27
Elapsed: 39m3s
Revision: v1.13.13-beta.0.1+874f0559d9b358
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/26a8ff27-26f9-4df9-a37a-80954d479c37/targets/test'}}
resultstore: https://source.cloud.google.com/results/invocations/26a8ff27-26f9-4df9-a37a-80954d479c37/targets/test
uploader: crier
infra-commit: f5dd3ee0e
job-version: v1.13.13-beta.0.1+874f0559d9b358
pod: de055aef-5020-11ea-9bea-16a0f55e352c
repo: k8s.io/kubernetes
repo-commit: 874f0559d9b358f87959ec0bb7645d9cb3d5f7ba
repos: {u'k8s.io/kubernetes': u'release-1.13', u'github.com/containerd/cri': u'release/1.2'}
revision: v1.13.13-beta.0.1+874f0559d9b358

Test Failures


Node Tests (36m34s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1
				from junit_runner.xml
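
This is the suite-level failure of the run_remote.go step rather than a single test case. To reproduce the invocation outside CI, the same command can be run by hand; a minimal sketch, assuming a Kubernetes checkout at /go/src/k8s.io/kubernetes and gcloud SSH access to the cri-containerd-node-e2e project (the long --test_args value is elided here; it is shown in full in the error above):

    # Sketch: re-run the node e2e suite the way this job did.
    go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go \
      --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce \
      --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e \
      --zone=us-west1-b --ssh-user=prow \
      --ssh-key=/workspace/.ssh/google_compute_engine \
      --ginkgo-flags=--nodes=8 --focus='\[NodeConformance\]' --skip='\[Flaky\]|\[Serial\]' \
      --test-timeout=1h5m0s \
      --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml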


Error lines from build-log.txt

... skipping 189 lines ...
W0215 18:30:39.943] # fetch_metadata fetches metadata from GCE metadata server.
W0215 18:30:39.943] # Var set:
W0215 18:30:39.943] # 1. Metadata key: key of the metadata.
W0215 18:30:39.943] fetch_metadata() {
W0215 18:30:39.943]   local -r key=$1
W0215 18:30:39.943]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0215 18:30:39.943]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0215 18:30:39.944]     grep -q "^${key}$"; then
W0215 18:30:39.944]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0215 18:30:39.944]       "${attributes}/${key}"
W0215 18:30:39.944]   fi
W0215 18:30:39.944] }
W0215 18:30:39.944] 
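
As a hedged illustration of how fetch_metadata is consumed (the key name below is hypothetical; it is chosen to match the pull_refs variable used later in this script):

    # Hypothetical usage: read an instance attribute if it exists.
    pull_refs=$(fetch_metadata "pull-refs")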
W0215 18:30:39.944] # fetch_env fetches environment variables from GCE metadata server
W0215 18:30:39.945] # and generates an env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0215 18:30:39.953]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0215 18:30:39.953]     deploy_path="${deploy_path}/${deploy_dir}"
W0215 18:30:39.953]   fi
W0215 18:30:39.953] 
W0215 18:30:39.953]   # TODO(random-liu): Put version into the metadata instead of
W0215 18:30:39.954]   # deciding it in cloud init. This may cause issues for the reboot test.
W0215 18:30:39.954]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0215 18:30:39.954]     https://storage.googleapis.com/${deploy_path}/latest)
W0215 18:30:39.954] fi
W0215 18:30:39.954] 
W0215 18:30:39.954] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0215 18:30:39.954] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0215 18:30:39.955] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
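
These two variables fully determine the artifact URL. A minimal sketch of the download-and-unpack step they feed (the destination directory is an assumption based on the /home/containerd paths elsewhere in this log):

    # Sketch: fetch and unpack the cri-containerd tarball (paths assumed).
    curl -fsSL --retry 5 "${TARBALL_GCS_PATH}" -o "/tmp/${TARBALL_GCS_NAME}"
    tar -xzf "/tmp/${TARBALL_GCS_NAME}" -C "${CONTAINERD_HOME:-/home/containerd}"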
... skipping 138 lines ...
W0215 18:30:39.974]       [Service]
W0215 18:30:39.974]       Type=oneshot
W0215 18:30:39.974]       RemainAfterExit=yes
W0215 18:30:39.974]       ExecStartPre=/bin/mkdir -p /home/containerd
W0215 18:30:39.974]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0215 18:30:39.974]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0215 18:30:39.975]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0215 18:30:39.975]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0215 18:30:39.975]       ExecStart=/home/containerd/configure.sh
W0215 18:30:39.975] 
W0215 18:30:39.975]       [Install]
W0215 18:30:39.975]       WantedBy=containerd.target
W0215 18:30:39.975] 
... skipping 65 lines ...
W0215 18:30:39.982]       [Service]
W0215 18:30:39.982]       Type=oneshot
W0215 18:30:39.982]       RemainAfterExit=yes
W0215 18:30:39.983]       ExecStartPre=/bin/mkdir -p /home/containerd
W0215 18:30:39.983]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0215 18:30:39.983]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0215 18:30:39.983]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0215 18:30:39.983]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0215 18:30:39.984]       ExecStart=/home/containerd/configure.sh
W0215 18:30:39.984] 
W0215 18:30:39.984]       [Install]
W0215 18:30:39.984]       WantedBy=containerd.target
W0215 18:30:39.984] 
... skipping 74 lines ...
W0215 18:30:40.000] # fetch_metadata fetches metadata from GCE metadata server.
W0215 18:30:40.000] # Var set:
W0215 18:30:40.001] # 1. Metadata key: key of the metadata.
W0215 18:30:40.001] fetch_metadata() {
W0215 18:30:40.001]   local -r key=$1
W0215 18:30:40.001]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0215 18:30:40.006]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0215 18:30:40.006]     grep -q "^${key}$"; then
W0215 18:30:40.006]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0215 18:30:40.006]       "${attributes}/${key}"
W0215 18:30:40.006]   fi
W0215 18:30:40.007] }
W0215 18:30:40.007] 
W0215 18:30:40.007] # fetch_env fetches environment variables from GCE metadata server
W0215 18:30:40.007] # and generates an env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0215 18:30:40.017]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0215 18:30:40.017]     deploy_path="${deploy_path}/${deploy_dir}"
W0215 18:30:40.017]   fi
W0215 18:30:40.017] 
W0215 18:30:40.017]   # TODO(random-liu): Put version into the metadata instead of
W0215 18:30:40.017]   # deciding it in cloud init. This may cause issues for the reboot test.
W0215 18:30:40.018]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0215 18:30:40.018]     https://storage.googleapis.com/${deploy_path}/latest)
W0215 18:30:40.018] fi
W0215 18:30:40.018] 
W0215 18:30:40.018] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0215 18:30:40.018] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0215 18:30:40.018] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 125 lines ...
W0215 18:34:55.451] I0215 18:34:55.451256    4281 utils.go:82] Configure iptables firewall rules on "tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1"
W0215 18:34:57.157] I0215 18:34:57.157256    4281 utils.go:117] Killing any existing node processes on "tmp-node-e2e-445e0156-cos-stable-60-9592-84-0"
W0215 18:34:57.386] I0215 18:34:57.385900    4281 utils.go:117] Killing any existing node processes on "tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1"
W0215 18:34:58.297] I0215 18:34:58.297213    4281 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0215 18:34:58.298] I0215 18:34:58.297253    4281 node_e2e.go:164] Starting tests on "tmp-node-e2e-445e0156-cos-stable-60-9592-84-0"
W0215 18:34:58.748] I0215 18:34:58.748081    4281 node_e2e.go:164] Starting tests on "tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1"
W0215 19:04:20.471] I0215 19:04:20.470474    4281 remote.go:197] Test failed unexpectedly. Attempting to retrieve system logs (only works for nodes with journald)
W0215 19:04:20.904] I0215 19:04:20.904041    4281 remote.go:207] Failed to run journalctl (normal if it doesn't exist on the node): command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.247.33.195 -- sudo sh -c 'journalctl --system --all > /tmp/20200215T190420-system.log'] failed with error: exit status 255, output: "prow@35.247.33.195: Permission denied (publickey).\r\n"
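
The failure above is an SSH publickey rejection, not a journalctl problem. The same collection can be retried by hand once key access works (host, key path, and command taken from the log line above):

    # Manual equivalent of the failed log fetch.
    ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no \
        -i /workspace/.ssh/google_compute_engine prow@35.247.33.195 \
        -- sudo sh -c 'journalctl --system --all > /tmp/system.log'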
W0215 19:04:20.905] I0215 19:04:20.904162    4281 remote.go:122] Copying test artifacts from "tmp-node-e2e-445e0156-cos-stable-60-9592-84-0"
W0215 19:04:21.831] I0215 19:04:21.831364    4281 run_remote.go:718] Deleting instance "tmp-node-e2e-445e0156-cos-stable-60-9592-84-0"
I0215 19:04:22.383] 
I0215 19:04:22.384] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0215 19:04:22.384] >                              START TEST                                >
I0215 19:04:22.384] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
... skipping 104 lines ...
I0215 19:04:22.405] STEP: Creating a kubernetes client
I0215 19:04:22.405] STEP: Building a namespace api object, basename init-container
I0215 19:04:22.405] Feb 15 18:36:22.174: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
I0215 19:04:22.406] Feb 15 18:36:22.174: INFO: Skipping waiting for service account
I0215 19:04:22.406] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:22.406]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0215 19:04:22.406] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0215 19:04:22.406]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:04:22.406] STEP: creating the pod
I0215 19:04:22.406] Feb 15 18:36:22.174: INFO: PodSpec: initContainers in spec.initContainers
I0215 19:04:22.407] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:22.407]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:04:22.407] Feb 15 18:36:29.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
I0215 19:04:22.408] Feb 15 18:36:35.436: INFO: namespace e2e-tests-init-container-swdbr deletion completed in 6.039345748s
I0215 19:04:22.408] 
I0215 19:04:22.408] 
I0215 19:04:22.408] • [SLOW TEST:13.360 seconds]
I0215 19:04:22.409] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:22.409] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0215 19:04:22.409]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0215 19:04:22.409]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:04:22.409] ------------------------------
I0215 19:04:22.409] [BeforeEach] [sig-node] Downward API
I0215 19:04:22.409]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:04:22.410] STEP: Creating a kubernetes client
I0215 19:04:22.410] STEP: Building a namespace api object, basename downward-api
... skipping 786 lines ...
I0215 19:04:22.521] Feb 15 18:37:03.612: INFO: pod-secrets-1dbd5d6e-5022-11ea-9315-42010a8a001e               tmp-node-e2e-445e0156-cos-stable-60-9592-84-0  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:44 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:46 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:46 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:44 +0000 UTC  }]
I0215 19:04:22.522] Feb 15 18:37:03.612: INFO: stats-busybox-0                                                tmp-node-e2e-445e0156-cos-stable-60-9592-84-0  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:22 +0000 UTC  }]
I0215 19:04:22.522] Feb 15 18:37:03.612: INFO: stats-busybox-1                                                tmp-node-e2e-445e0156-cos-stable-60-9592-84-0  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:28 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:28 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 18:36:22 +0000 UTC  }]
I0215 19:04:22.522] Feb 15 18:37:03.612: INFO: 
I0215 19:04:22.522] Feb 15 18:37:03.613: INFO: 
I0215 19:04:22.523] Logging node info for node tmp-node-e2e-445e0156-cos-stable-60-9592-84-0
I0215 19:04:22.528] Feb 15 18:37:03.615: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:tmp-node-e2e-445e0156-cos-stable-60-9592-84-0,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/tmp-node-e2e-445e0156-cos-stable-60-9592-84-0,UID:0fa5136f-5022-11ea-99a0-42010a8a001e,ResourceVersion:384,Generation:0,CreationTimestamp:2020-02-15 18:36:20 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/hostname: tmp-node-e2e-445e0156-cos-stable-60-9592-84-0,},Annotations:map[string]string{volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16701562880 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885531136 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15031406568 0} {<nil>} 15031406568 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623387136 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-02-15 18:37:01 +0000 UTC 2020-02-15 18:36:13 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-15 18:37:01 +0000 UTC 2020-02-15 18:36:13 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-15 18:37:01 +0000 UTC 2020-02-15 18:36:13 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-15 18:37:01 +0000 UTC 2020-02-15 18:36:13 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.138.0.30} {Hostname tmp-node-e2e-445e0156-cos-stable-60-9592-84-0}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7f496dcef8881a53a41d2ca87385f1e2,SystemUUID:7F496DCE-F888-1A53-A41D-2CA87385F1E2,BootID:a0601335-5445-4823-a721-b2107bfb8cc6,KernelVersion:4.4.64+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.12,KubeletVersion:v1.13.13-beta.0.1+874f0559d9b358,KubeProxyVersion:v1.13.13-beta.0.1+874f0559d9b358,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0] 242137147} {[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0] 111775822} {[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0] 82348896} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64@sha256:229d66a7fd93518588ced42666d631a3e3d1fa4757d0cb7bb0a110302195b189 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0] 39643557} {[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64@sha256:e5aca92206c7bdc2be473f71c7917c946f1140bd71b93ca2449457109e5f43c2 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0] 39642590} {[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2] 33121906} {[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest] 30530401} {[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine] 6976771} {[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa] 6819465} {[gcr.io/kubernetes-e2e-test-images/net@sha256:973f47a88f50ccd7800f6ec300e664461e7c011c2da3a33edf32a73dd9ff9c01 gcr.io/kubernetes-e2e-test-images/net:1.0] 5704387} {[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0] 4004104} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 3054649} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:748662321b68a4b73b5a56961b61b980ad3683fc6bcae62c1306018fcdba1809 
gcr.io/kubernetes-e2e-test-images/liveness:1.0] 1743226} {[k8s.gcr.io/stress:v1] 1558004} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29] 729986} {[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff] 676941} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 317164}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0215 19:04:22.528] Feb 15 18:37:03.615: INFO: 
I0215 19:04:22.528] Logging kubelet events for node tmp-node-e2e-445e0156-cos-stable-60-9592-84-0
I0215 19:04:22.528] Feb 15 18:37:03.616: INFO: 
I0215 19:04:22.528] Logging pods the kubelet thinks are on node tmp-node-e2e-445e0156-cos-stable-60-9592-84-0
I0215 19:04:22.528] Feb 15 18:37:03.618: INFO: static-pod-1844a595-5022-11ea-9382-42010a8a001e-tmp-node-e2e-445e0156-cos-stable-60-9592-84-0 started at <nil> (0+0 container statuses recorded)
I0215 19:04:22.529] Feb 15 18:37:03.618: INFO: pod-secrets-1dbd5d6e-5022-11ea-9315-42010a8a001e started at 2020-02-15 18:36:44 +0000 UTC (0+3 container statuses recorded)
... skipping 23 lines ...
I0215 19:04:22.533] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0215 19:04:22.533]   when querying /stats/summary
I0215 19:04:22.533]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:41
I0215 19:04:22.533]     should report resource usage through the stats api [It]
I0215 19:04:22.534]     _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:52
I0215 19:04:22.534] 
I0215 19:04:22.534]     Failed after 15.141s.
I0215 19:04:22.534]     Expected
I0215 19:04:22.534]         <string>: Summary
I0215 19:04:22.534]     to match fields: {
I0215 19:04:22.534]     .Node.SystemContainers[pods].CPU:
I0215 19:04:22.534]     	Expected
I0215 19:04:22.534]     	    <string>: CPUStats
... skipping 1770 lines ...
I0215 19:04:22.802] STEP: verifying the pod is in kubernetes
I0215 19:04:22.802] STEP: updating the pod
I0215 19:04:22.802] Feb 15 18:41:12.131: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e"
I0215 19:04:22.802] Feb 15 18:41:12.131: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e" in namespace "e2e-tests-pods-wwvbz" to be "terminated due to deadline exceeded"
I0215 19:04:22.803] Feb 15 18:41:12.132: INFO: Pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e": Phase="Running", Reason="", readiness=true. Elapsed: 784.395µs
I0215 19:04:22.803] Feb 15 18:41:14.133: INFO: Pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e": Phase="Running", Reason="", readiness=true. Elapsed: 2.002084096s
I0215 19:04:22.803] Feb 15 18:41:16.135: INFO: Pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.003235199s
I0215 19:04:22.803] Feb 15 18:41:16.135: INFO: Pod "pod-update-activedeadlineseconds-bba82ce3-5022-11ea-989f-42010a8a001e" satisfied condition "terminated due to deadline exceeded"
I0215 19:04:22.803] [AfterEach] [k8s.io] Pods
I0215 19:04:22.804]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:04:22.804] Feb 15 18:41:16.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:04:22.804] STEP: Destroying namespace "e2e-tests-pods-wwvbz" for this suite.
I0215 19:04:22.804] Feb 15 18:41:22.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 253 lines ...
I0215 19:04:22.843] [BeforeEach] [k8s.io] Security Context
I0215 19:04:22.843]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0215 19:04:22.843] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [NodeConformance]
I0215 19:04:22.844]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:135
I0215 19:04:22.844] Feb 15 18:41:54.315: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-d64b716a-5022-11ea-910c-42010a8a001e" in namespace "e2e-tests-security-context-test-5bwnm" to be "success or failure"
I0215 19:04:22.844] Feb 15 18:41:54.315: INFO: Pod "busybox-readonly-true-d64b716a-5022-11ea-910c-42010a8a001e": Phase="Pending", Reason="", readiness=false. Elapsed: 844.076µs
I0215 19:04:22.844] Feb 15 18:41:56.317: INFO: Pod "busybox-readonly-true-d64b716a-5022-11ea-910c-42010a8a001e": Phase="Failed", Reason="", readiness=false. Elapsed: 2.002781731s
I0215 19:04:22.844] Feb 15 18:41:56.317: INFO: Pod "busybox-readonly-true-d64b716a-5022-11ea-910c-42010a8a001e" satisfied condition "success or failure"
I0215 19:04:22.845] [AfterEach] [k8s.io] Security Context
I0215 19:04:22.845]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:04:22.845] Feb 15 18:41:56.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:04:22.845] STEP: Destroying namespace "e2e-tests-security-context-test-5bwnm" for this suite.
I0215 19:04:22.845] Feb 15 18:42:02.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 851 lines ...
I0215 19:04:22.979] Feb 15 18:38:26.161: INFO: Skipping waiting for service account
I0215 19:04:22.979] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0215 19:04:22.979]   _output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0215 19:04:22.979] STEP: create the container
I0215 19:04:22.980] STEP: check the container status
I0215 19:04:22.980] STEP: delete the container
I0215 19:04:22.980] Feb 15 18:43:26.830: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0215 19:04:22.980] STEP: create the container
I0215 19:04:22.980] STEP: check the container status
I0215 19:04:22.980] STEP: delete the container
I0215 19:04:22.980] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0215 19:04:22.981]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:04:22.981] Feb 15 18:43:28.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 1452 lines ...
I0215 19:04:23.207]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:04:23.207] STEP: Creating a kubernetes client
I0215 19:04:23.208] STEP: Building a namespace api object, basename init-container
I0215 19:04:23.208] Feb 15 18:45:13.963: INFO: Skipping waiting for service account
I0215 19:04:23.208] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:23.208]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0215 19:04:23.208] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0215 19:04:23.208]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:04:23.208] STEP: creating the pod
I0215 19:04:23.209] Feb 15 18:45:13.964: INFO: PodSpec: initContainers in spec.initContainers
I0215 19:04:23.215] Feb 15 18:45:56.250: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4d53dae5-5023-11ea-9382-42010a8a001e", GenerateName:"", Namespace:"e2e-tests-init-container-kb6pk", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-kb6pk/pods/pod-init-4d53dae5-5023-11ea-9382-42010a8a001e", UID:"4d540852-5023-11ea-99a0-42010a8a001e", ResourceVersion:"3424", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717389113, loc:(*time.Location)(0x9d4d080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"964006398"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012346e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-445e0156-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010a8120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001234750)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001234770)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001234780), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001234784)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717389113, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717389113, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717389113, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717389113, loc:(*time.Location)(0x9d4d080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.30", PodIP:"10.100.0.171", StartTime:(*v1.Time)(0xc0013d1060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0013c0af0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0013c0b60)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://3f8c26b200baf64c7ac66ab2dbd0a99cc42645588d4cd9a2ca11f42ed477f99b"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013d10c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0013d1100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0215 19:04:23.216] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:23.216]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:04:23.216] Feb 15 18:45:56.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:04:23.216] STEP: Destroying namespace "e2e-tests-init-container-kb6pk" for this suite.
I0215 19:04:23.216] Feb 15 18:46:18.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0215 19:04:23.216] Feb 15 18:46:18.281: INFO: namespace: e2e-tests-init-container-kb6pk, resource: bindings, ignored listing per whitelist
I0215 19:04:23.216] Feb 15 18:46:18.294: INFO: namespace e2e-tests-init-container-kb6pk deletion completed in 22.042062953s
I0215 19:04:23.217] 
I0215 19:04:23.217] 
I0215 19:04:23.217] • [SLOW TEST:64.343 seconds]
I0215 19:04:23.217] [k8s.io] InitContainer [NodeConformance]
I0215 19:04:23.217] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0215 19:04:23.217]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0215 19:04:23.218]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:04:23.218] ------------------------------
I0215 19:04:23.218] [BeforeEach] [sig-storage] ConfigMap
I0215 19:04:23.219]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:04:23.219] STEP: Creating a kubernetes client
I0215 19:04:23.219] STEP: Building a namespace api object, basename configmap
... skipping 79 lines ...
I0215 19:04:23.228]   should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
I0215 19:04:23.228]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:662
I0215 19:04:23.228] ------------------------------
I0215 19:04:23.228] I0215 19:04:18.469491    1222 e2e_node_suite_test.go:187] Stopping node services...
I0215 19:04:23.228] I0215 19:04:18.469515    1222 server.go:258] Kill server "services"
I0215 19:04:23.229] I0215 19:04:18.469524    1222 server.go:295] Killing process 1366 (services) with -TERM
I0215 19:04:23.229] E0215 19:04:18.575658    1222 services.go:89] Failed to stop services: error stopping "services": waitid: no child processes
I0215 19:04:23.229] I0215 19:04:18.575674    1222 server.go:258] Kill server "kubelet"
I0215 19:04:23.229] I0215 19:04:18.676060    1222 services.go:146] Fetching log files...
I0215 19:04:23.229] I0215 19:04:18.676232    1222 services.go:155] Get log file "kern.log" with journalctl command [-k].
I0215 19:04:23.229] I0215 19:04:18.731963    1222 services.go:155] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0215 19:04:23.230] I0215 19:04:18.742577    1222 services.go:155] Get log file "docker.log" with journalctl command [-u docker].
I0215 19:04:23.230] I0215 19:04:18.757609    1222 services.go:155] Get log file "containerd.log" with journalctl command [-u containerd].
I0215 19:04:23.230] I0215 19:04:19.281750    1222 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20200215T183446.service].
I0215 19:04:23.230] I0215 19:04:20.382669    1222 e2e_node_suite_test.go:192] Tests Finished
I0215 19:04:23.230] 
I0215 19:04:23.230] 
I0215 19:04:23.230] 
I0215 19:04:23.230] Summarizing 1 Failure:
I0215 19:04:23.231] 
I0215 19:04:23.231] [Fail] [k8s.io] Summary API [NodeConformance] when querying /stats/summary [It] should report resource usage through the stats api 
I0215 19:04:23.231] _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:334
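
The failing spec asserts on fields of the kubelet Summary API, which can also be inspected directly on a node. A sketch, assuming the default secure kubelet port and credentials authorized for the endpoint:

    # Query the endpoint the test exercises; real nodes normally require a
    # bearer token or client certificate on the secure port.
    curl -sk https://localhost:10250/stats/summary | head -n 40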
I0215 19:04:23.231] 
I0215 19:04:23.231] Ran 157 of 285 Specs in 1757.909 seconds
I0215 19:04:23.231] FAIL! -- 156 Passed | 1 Failed | 0 Pending | 128 Skipped 
I0215 19:04:23.231] 
I0215 19:04:23.232] Ginkgo ran 1 suite in 29m21.485989717s
I0215 19:04:23.232] Test Suite Failed
I0215 19:04:23.232] 
I0215 19:04:23.232] Failure Finished Test Suite on Host tmp-node-e2e-445e0156-cos-stable-60-9592-84-0
I0215 19:04:23.233] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.247.33.195 -- sudo sh -c 'cd /tmp/node-e2e-20200215T183446 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-445e0156-cos-stable-60-9592-84-0 --report-dir=/tmp/node-e2e-20200215T183446/results --report-prefix=cos-stable --image-description="cos-stable-60-9592-84-0" --kubelet-flags=--experimental-mounter-path=/tmp/node-e2e-20200215T183446/mounter --kubelet-flags=--experimental-kernel-memcg-notification=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.247.33.195:/tmp/node-e2e-20200215T183446/results/*.log /workspace/_artifacts/tmp-node-e2e-445e0156-cos-stable-60-9592-84-0] failed with error: exit status 1]
I0215 19:04:23.233] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0215 19:04:23.233] <                              FINISH TEST                               <
I0215 19:04:23.233] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0215 19:04:23.234] 
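To iterate on just the failing spec, the on-node ginkgo invocation shown above can be narrowed with a tighter --focus. A sketch with paths taken from that command (the runtime and kubelet flags from the full command are omitted here for brevity, but a real re-run needs them too):

    # Sketch: rerun only the Summary API spec on the test node.
    cd /tmp/node-e2e-20200215T183446 && \
      ./ginkgo --focus="should report resource usage through the stats api" \
        ./e2e_node.test -- --logtostderr --v 4 \
        --node-name=tmp-node-e2e-445e0156-cos-stable-60-9592-84-0 \
        --report-dir=/tmp/node-e2e-20200215T183446/results
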
W0215 19:06:39.314] I0215 19:06:39.314281    4281 remote.go:122] Copying test artifacts from "tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1"
W0215 19:06:40.470] I0215 19:06:40.470156    4281 run_remote.go:718] Deleting instance "tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1"
... skipping 524 lines ...
I0215 19:06:41.460] STEP: verifying the pod is in kubernetes
I0215 19:06:41.460] STEP: updating the pod
I0215 19:06:41.460] Feb 15 18:36:53.038: INFO: Successfully updated pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015"
I0215 19:06:41.460] Feb 15 18:36:53.038: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015" in namespace "e2e-tests-pods-blszh" to be "terminated due to deadline exceeded"
I0215 19:06:41.461] Feb 15 18:36:53.042: INFO: Pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015": Phase="Running", Reason="", readiness=true. Elapsed: 3.967638ms
I0215 19:06:41.461] Feb 15 18:36:55.044: INFO: Pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015": Phase="Running", Reason="", readiness=true. Elapsed: 2.00615554s
I0215 19:06:41.461] Feb 15 18:36:57.046: INFO: Pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.007774688s
I0215 19:06:41.461] Feb 15 18:36:57.046: INFO: Pod "pod-update-activedeadlineseconds-213571ee-5022-11ea-9b74-42010a8a0015" satisfied condition "terminated due to deadline exceeded"
I0215 19:06:41.462] [AfterEach] [k8s.io] Pods
I0215 19:06:41.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:06:41.462] Feb 15 18:36:57.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:06:41.462] STEP: Destroying namespace "e2e-tests-pods-blszh" for this suite.
I0215 19:06:41.463] Feb 15 18:37:03.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1304 lines ...
I0215 19:06:41.681]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:06:41.681] STEP: Creating a kubernetes client
I0215 19:06:41.681] STEP: Building a namespace api object, basename init-container
I0215 19:06:41.681] Feb 15 18:38:51.815: INFO: Skipping waiting for service account
I0215 19:06:41.681] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.682]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0215 19:06:41.682] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0215 19:06:41.682]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:06:41.682] STEP: creating the pod
I0215 19:06:41.682] Feb 15 18:38:51.815: INFO: PodSpec: initContainers in spec.initContainers
I0215 19:06:41.682] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.682]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:06:41.682] Feb 15 18:38:53.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
I0215 19:06:41.683] Feb 15 18:38:59.745: INFO: namespace e2e-tests-init-container-ww6hv deletion completed in 6.059817826s
I0215 19:06:41.683] 
I0215 19:06:41.683] 
I0215 19:06:41.683] • [SLOW TEST:7.946 seconds]
I0215 19:06:41.683] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.683] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0215 19:06:41.684]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0215 19:06:41.684]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:06:41.684] ------------------------------
I0215 19:06:41.684] [BeforeEach] [k8s.io] Docker Containers
I0215 19:06:41.684]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:06:41.684] STEP: Creating a kubernetes client
I0215 19:06:41.684] STEP: Building a namespace api object, basename containers
... skipping 762 lines ...
I0215 19:06:41.808]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:06:41.808] STEP: Creating a kubernetes client
I0215 19:06:41.808] STEP: Building a namespace api object, basename init-container
I0215 19:06:41.808] Feb 15 18:40:11.865: INFO: Skipping waiting for service account
I0215 19:06:41.809] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0215 19:06:41.809] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0215 19:06:41.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:06:41.809] STEP: creating the pod
I0215 19:06:41.810] Feb 15 18:40:11.865: INFO: PodSpec: initContainers in spec.initContainers
I0215 19:06:41.816] Feb 15 18:40:59.513: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9943401f-5022-11ea-91af-42010a8a0015", GenerateName:"", Namespace:"e2e-tests-init-container-thhfd", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-thhfd/pods/pod-init-9943401f-5022-11ea-91af-42010a8a0015", UID:"9943dc8d-5022-11ea-bb3c-42010a8a0015", ResourceVersion:"1743", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717388811, loc:(*time.Location)(0x9d4d080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"time":"865299768", "name":"foo"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0012d5e00), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0010c1140), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012d5e70)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0012d5e90)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0012d5ea0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0012d5ea4)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717388811, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717388811, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717388811, loc:(*time.Location)(0x9d4d080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717388811, loc:(*time.Location)(0x9d4d080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.21", PodIP:"10.100.0.88", StartTime:(*v1.Time)(0xc0012e06a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003e6700)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003e6770)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://4fe7c7b03d615a84e8d38477799f25594ba8d14340c2f3a9f47ca3f31b011724"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012e0700), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0012e0740), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0215 19:06:41.817] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.817]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:06:41.817] Feb 15 18:40:59.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:06:41.817] STEP: Destroying namespace "e2e-tests-init-container-thhfd" for this suite.
I0215 19:06:41.817] Feb 15 18:41:21.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0215 19:06:41.818] Feb 15 18:41:21.567: INFO: namespace: e2e-tests-init-container-thhfd, resource: bindings, ignored listing per whitelist
I0215 19:06:41.818] Feb 15 18:41:21.574: INFO: namespace e2e-tests-init-container-thhfd deletion completed in 22.052499244s
I0215 19:06:41.818] 
I0215 19:06:41.818] 
I0215 19:06:41.818] • [SLOW TEST:69.771 seconds]
I0215 19:06:41.818] [k8s.io] InitContainer [NodeConformance]
I0215 19:06:41.819] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0215 19:06:41.819]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0215 19:06:41.819]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0215 19:06:41.819] ------------------------------
I0215 19:06:41.819] [BeforeEach] [sig-api-machinery] Secrets
I0215 19:06:41.819]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:153
I0215 19:06:41.819] STEP: Creating a kubernetes client
I0215 19:06:41.820] STEP: Building a namespace api object, basename secrets
... skipping 499 lines ...
I0215 19:06:41.912] [BeforeEach] [k8s.io] Security Context
I0215 19:06:41.912]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0215 19:06:41.912] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [NodeConformance]
I0215 19:06:41.912]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:135
I0215 19:06:41.913] Feb 15 18:42:13.412: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-e1b459c1-5022-11ea-91af-42010a8a0015" in namespace "e2e-tests-security-context-test-ws824" to be "success or failure"
I0215 19:06:41.913] Feb 15 18:42:13.416: INFO: Pod "busybox-readonly-true-e1b459c1-5022-11ea-91af-42010a8a0015": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711971ms
I0215 19:06:41.913] Feb 15 18:42:15.418: INFO: Pod "busybox-readonly-true-e1b459c1-5022-11ea-91af-42010a8a0015": Phase="Failed", Reason="", readiness=false. Elapsed: 2.006502683s
I0215 19:06:41.914] Feb 15 18:42:15.418: INFO: Pod "busybox-readonly-true-e1b459c1-5022-11ea-91af-42010a8a0015" satisfied condition "success or failure"
I0215 19:06:41.914] [AfterEach] [k8s.io] Security Context
I0215 19:06:41.914]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:06:41.914] Feb 15 18:42:15.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0215 19:06:41.914] STEP: Destroying namespace "e2e-tests-security-context-test-ws824" for this suite.
I0215 19:06:41.914] Feb 15 18:42:21.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1516 lines ...
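In the Security Context spec above, the wait condition named "success or failure" accepts either terminal phase: with a read-only root filesystem the container's write fails, the pod ends in Phase="Failed", and that is the expected outcome, which is why a Failed phase satisfies the condition. A minimal sketch of a pod of this shape, assuming the standard k8s.io/api/core/v1 and k8s.io/apimachinery types; the name and command are illustrative, not the suite's actual source.

// Sketch: a pod whose container runs with a read-only root filesystem.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-true"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "docker.io/library/busybox:1.29",
				// The write must fail on a read-only rootfs, so the pod
				// terminates with Phase="Failed", as logged above.
				Command: []string{"sh", "-c", "touch /should-fail"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec)
}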
I0215 19:06:42.172] Feb 15 18:39:07.906: INFO: Skipping waiting for service account
I0215 19:06:42.172] [It] should not be able to pull from private registry without secret [NodeConformance]
I0215 19:06:42.173]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:304
I0215 19:06:42.173] STEP: create the container
I0215 19:06:42.173] STEP: check the container status
I0215 19:06:42.173] STEP: delete the container
I0215 19:06:42.173] Feb 15 18:44:08.785: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0215 19:06:42.173] STEP: create the container
I0215 19:06:42.174] STEP: check the container status
I0215 19:06:42.174] STEP: delete the container
I0215 19:06:42.174] [AfterEach] [k8s.io] Container Runtime
I0215 19:06:42.174]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 19:06:42.174] Feb 15 18:44:11.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 679 lines ...
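The private-registry spec above creates a container whose image cannot be pulled without credentials and expects it to sit in a Waiting state; the "No.1 attempt failed" line shows one retry after the state briefly read "Running". A minimal sketch of checking for that Waiting state using the k8s.io/api types; the helper name waitingReason and the image path are hypothetical.

// Sketch: report whether a pod's first container is stuck Waiting
// (e.g. ErrImagePull / ImagePullBackOff for a private image).
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReason returns the Waiting reason of the first container,
// or "" if that container is not in a Waiting state.
func waitingReason(pod *corev1.Pod) string {
	if len(pod.Status.ContainerStatuses) == 0 {
		return ""
	}
	if w := pod.Status.ContainerStatuses[0].State.Waiting; w != nil {
		return w.Reason
	}
	return ""
}

func main() {
	// A pod that references a private image but carries no ImagePullSecrets.
	pod := &corev1.Pod{
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "private",
				Image: "gcr.io/some-private-project/image:latest", // illustrative only
			}},
		},
	}
	fmt.Println(waitingReason(pod)) // empty here; populated by the kubelet on a real node
}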
I0215 19:06:42.290] I0215 19:06:34.121895    2609 services.go:155] Get log file "containerd.log" with journalctl command [-u containerd].
I0215 19:06:42.290] I0215 19:06:35.052169    2609 services.go:155] Get log file "kubelet.log" with journalctl command [-u kubelet-20200215T183446.service].
I0215 19:06:42.290] I0215 19:06:39.282046    2609 e2e_node_suite_test.go:192] Tests Finished
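The services.go lines above collect the containerd and kubelet logs by shelling out to journalctl for each unit. A minimal stand-in sketch using only the Go standard library; saveJournal is a hypothetical name, and the real helper lives in the e2e_node services package.

// Sketch: dump `journalctl <args...>` output into a results file.
package main

import (
	"log"
	"os"
	"os/exec"
)

func saveJournal(path string, args ...string) error {
	// CombinedOutput captures stdout and stderr; keep whatever was
	// captured even if journalctl itself exits non-zero.
	out, err := exec.Command("journalctl", args...).CombinedOutput()
	if werr := os.WriteFile(path, out, 0o644); werr != nil {
		return werr
	}
	return err
}

func main() {
	if err := saveJournal("containerd.log", "-u", "containerd"); err != nil {
		log.Fatal(err)
	}
}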
I0215 19:06:42.290] 
I0215 19:06:42.290] 
I0215 19:06:42.290] Ran 157 of 285 Specs in 1897.431 seconds
I0215 19:06:42.290] SUCCESS! -- 157 Passed | 0 Failed | 0 Pending | 128 Skipped 
I0215 19:06:42.291] 
I0215 19:06:42.291] Ginkgo ran 1 suite in 31m39.909724653s
I0215 19:06:42.291] Test Suite Passed
I0215 19:06:42.291] 
I0215 19:06:42.291] Failure Finished Test Suite on Host tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1
I0215 19:06:42.291] command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.185.239.38:/tmp/node-e2e-20200215T183446/results/*.log /workspace/_artifacts/tmp-node-e2e-445e0156-ubuntu-gke-1604-xenial-v20180317-1] failed with error: exit status 1
I0215 19:06:42.291] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0215 19:06:42.291] <                              FINISH TEST                               <
I0215 19:06:42.292] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0215 19:06:42.292] 
I0215 19:06:42.292] Failure: 2 errors encountered.
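Note that Ginkgo reports the suite itself as passing (157 ran, 0 failed); the visible error that fails the job is the artifact-collection scp above returning exit status 1. A minimal sketch of such a collection step, assuming only Go's standard os/exec; the function name collectLogs is hypothetical, and the host, key, and paths are copied from the failing command logged above.

// Sketch: copy remote *.log artifacts with scp, surfacing a non-zero
// exit status as an error, as run_remote.go's failure message shows.
package main

import (
	"fmt"
	"os/exec"
)

func collectLogs(host, key, remoteDir, localDir string) error {
	cmd := exec.Command("scp",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-i", key,
		// The *.log glob is expanded on the remote side by scp.
		"-r", fmt.Sprintf("prow@%s:%s/*.log", host, remoteDir),
		localDir)
	if out, err := cmd.CombinedOutput(); err != nil {
		// An unreachable host, or a glob matching nothing, lands here
		// as "exit status 1", exactly like the log line above.
		return fmt.Errorf("command %v failed with error: %v, output: %s", cmd.Args, err, out)
	}
	return nil
}

func main() {
	err := collectLogs("35.185.239.38", "/workspace/.ssh/google_compute_engine",
		"/tmp/node-e2e-20200215T183446/results", "/workspace/_artifacts")
	if err != nil {
		fmt.Println(err)
	}
}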
W0215 19:06:42.392] exit status 1
W0215 19:06:42.724] 2020/02/15 19:06:42 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml' finished in 36m34.174600963s
W0215 19:06:42.724] 2020/02/15 19:06:42 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0215 19:06:42.724] 2020/02/15 19:06:42 node.go:52: Noop - Node Down()
W0215 19:06:42.724] 2020/02/15 19:06:42 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0215 19:06:42.725] 2020/02/15 19:06:42 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0215 19:06:42.969] 2020/02/15 19:06:42 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 243.641862ms
W0215 19:06:42.971] 2020/02/15 19:06:42 main.go:319: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1]
W0215 19:06:42.976] Traceback (most recent call last):
W0215 19:06:42.976]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 779, in <module>
W0215 19:06:42.976]     main(parse_args())
W0215 19:06:42.976]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 627, in main
W0215 19:06:42.976]     mode.start(runner_args)
W0215 19:06:42.977]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0215 19:06:42.977]     check_env(env, self.command, *args)
W0215 19:06:42.977]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0215 19:06:42.977]     subprocess.check_call(cmd, env=env)
W0215 19:06:42.977]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0215 19:06:42.978]     raise CalledProcessError(retcode, cmd)
W0215 19:06:42.979] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Serial\\]"', '--timeout=65m')' returned non-zero exit status 1
E0215 19:06:42.998] Command failed
I0215 19:06:42.998] process 327 exited with code 1 after 36.6m
E0215 19:06:42.998] FAIL: ci-containerd-node-e2e-1-2
I0215 19:06:42.999] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0215 19:06:43.649] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0215 19:06:43.711] process 43262 exited with code 0 after 0.0m
I0215 19:06:43.711] Call:  gcloud config get-value account
I0215 19:06:44.074] process 43274 exited with code 0 after 0.0m
I0215 19:06:44.075] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0215 19:06:44.075] Upload result and artifacts...
I0215 19:06:44.075] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1228747914088550404
I0215 19:06:44.075] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1228747914088550404/artifacts
W0215 19:06:45.123] CommandException: One or more URLs matched no objects.
E0215 19:06:45.283] Command failed
I0215 19:06:45.284] process 43286 exited with code 1 after 0.0m
W0215 19:06:45.284] Remote dir gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1228747914088550404/artifacts does not exist yet
I0215 19:06:45.284] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1228747914088550404/artifacts
I0215 19:06:47.295] process 43430 exited with code 0 after 0.0m
I0215 19:06:47.296] Call:  git rev-parse HEAD
I0215 19:06:47.302] process 43962 exited with code 0 after 0.0m
... skipping 13 lines ...