Result: FAILURE
Tests: 1 failed / 164 succeeded
Started: 2019-07-22 02:44
Elapsed: 40m33s
Revision: v1.12.11-beta.0.1+5f799a487b70ae
Builder: gke-prow-ssd-pool-1a225945-d0kf
Pod: 90bf1904-ac2a-11e9-b82b-365474bd0c86
Resultstore: https://source.cloud.google.com/results/invocations/7003752e-5937-4892-a1f5-7cfd1ed70191/targets/test
infra-commit: 6d769e14d
job-version: v1.12.11-beta.0.1+5f799a487b70ae
repo: k8s.io/kubernetes
repo-commit: 5f799a487b70aea5e298e5f5f1e3bac904b54ef6
repos: k8s.io/kubernetes @ release-1.12, github.com/containerd/cri @ release/1.2

Test Failures


Node Tests (39m19s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1
				from junit_runner.xml



164 passed tests and 119 skipped tests (collapsed)

Error lines from build-log.txt

... skipping 172 lines ...
W0722 02:46:09.396]       [Service]
W0722 02:46:09.397]       Type=oneshot
W0722 02:46:09.397]       RemainAfterExit=yes
W0722 02:46:09.397]       ExecStartPre=/bin/mkdir -p /home/containerd
W0722 02:46:09.397]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0722 02:46:09.398]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0722 02:46:09.398]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0722 02:46:09.398]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0722 02:46:09.398]       ExecStart=/home/containerd/configure.sh
W0722 02:46:09.399] 
W0722 02:46:09.399]       [Install]
W0722 02:46:09.399]       WantedBy=containerd.target
W0722 02:46:09.399] 
... skipping 74 lines ...
W0722 02:46:09.415] # fetch_metadata fetches metadata from GCE metadata server.
W0722 02:46:09.415] # Var set:
W0722 02:46:09.416] # 1. Metadata key: key of the metadata.
W0722 02:46:09.416] fetch_metadata() {
W0722 02:46:09.416]   local -r key=$1
W0722 02:46:09.416]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0722 02:46:09.417]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0722 02:46:09.417]     grep -q "^${key}$"; then
W0722 02:46:09.417]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0722 02:46:09.417]       "${attributes}/${key}"
W0722 02:46:09.418]   fi
W0722 02:46:09.418] }
W0722 02:46:09.418] 
W0722 02:46:09.418] # fetch_env fetches environment variables from GCE metadata server
W0722 02:46:09.418] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0722 02:46:09.435]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0722 02:46:09.435]     deploy_path="${deploy_path}/${deploy_dir}"
W0722 02:46:09.435]   fi
W0722 02:46:09.436] 
W0722 02:46:09.436]   # TODO(random-liu): Put version into the metadata instead of
W0722 02:46:09.436]   # deciding it in cloud init. This may cause issue to reboot test.
W0722 02:46:09.437]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0722 02:46:09.437]     https://storage.googleapis.com/${deploy_path}/latest)
W0722 02:46:09.437] fi
W0722 02:46:09.438] 
W0722 02:46:09.438] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0722 02:46:09.438] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0722 02:46:09.439] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
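
The fetch_metadata shell helper excerpted above does a two-step lookup: list the instance's attribute directory, then fetch the key only if it is actually present. As a rough illustration (a hypothetical Go port written for this writeup, not part of the deployed startup script), the same logic looks like this; "containerd-configure-sh" is one of the real keys the unit file above fetches:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
)

const attributes = "http://metadata.google.internal/computeMetadata/v1/instance/attributes"

// get issues one metadata request with the same legacy header
// ("X-Google-Metadata-Request: True") the startup script sends.
func get(url string) (string, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("X-Google-Metadata-Request", "True")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	body, err := ioutil.ReadAll(resp.Body)
	return string(body), err
}

// fetchMetadata mirrors the shell helper: list the attribute
// directory first, and fetch the key only if it is present.
func fetchMetadata(key string) (string, error) {
	listing, err := get(attributes + "/")
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(listing, "\n") {
		if line == key {
			return get(attributes + "/" + key)
		}
	}
	return "", fmt.Errorf("metadata key %q not set", key)
}

func main() {
	v, err := fetchMetadata("containerd-configure-sh")
	fmt.Println(v, err)
}

The X-Google-Metadata-Request header is the older form of today's Metadata-Flavor: Google header; the startup script still uses the legacy one.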
... skipping 102 lines ...
W0722 02:46:09.776]       [Service]
W0722 02:46:09.776]       Type=oneshot
W0722 02:46:09.776]       RemainAfterExit=yes
W0722 02:46:09.776]       ExecStartPre=/bin/mkdir -p /home/containerd
W0722 02:46:09.776]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0722 02:46:09.777]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0722 02:46:09.777]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0722 02:46:09.777]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0722 02:46:09.777]       ExecStart=/home/containerd/configure.sh
W0722 02:46:09.777] 
W0722 02:46:09.778]       [Install]
W0722 02:46:09.778]       WantedBy=containerd.target
W0722 02:46:09.778] 
... skipping 74 lines ...
W0722 02:46:09.792] # fetch_metadata fetches metadata from GCE metadata server.
W0722 02:46:09.792] # Var set:
W0722 02:46:09.792] # 1. Metadata key: key of the metadata.
W0722 02:46:09.793] fetch_metadata() {
W0722 02:46:09.793]   local -r key=$1
W0722 02:46:09.793]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0722 02:46:09.793]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0722 02:46:09.794]     grep -q "^${key}$"; then
W0722 02:46:09.794]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0722 02:46:09.794]       "${attributes}/${key}"
W0722 02:46:09.794]   fi
W0722 02:46:09.795] }
W0722 02:46:09.795] 
W0722 02:46:09.795] # fetch_env fetches environment variables from GCE metadata server
W0722 02:46:09.796] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0722 02:46:09.809]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0722 02:46:09.810]     deploy_path="${deploy_path}/${deploy_dir}"
W0722 02:46:09.810]   fi
W0722 02:46:09.810] 
W0722 02:46:09.810]   # TODO(random-liu): Put version into the metadata instead of
W0722 02:46:09.810]   # deciding it in cloud init. This may cause issue to reboot test.
W0722 02:46:09.811]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0722 02:46:09.811]     https://storage.googleapis.com/${deploy_path}/latest)
W0722 02:46:09.811] fi
W0722 02:46:09.811] 
W0722 02:46:09.811] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0722 02:46:09.812] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0722 02:46:09.812] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 164 lines ...
W0722 02:50:15.430] I0722 02:50:15.429538    4384 utils.go:82] Configure iptables firewall rules on "tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1"
W0722 02:50:16.632] I0722 02:50:16.631584    4384 utils.go:117] Killing any existing node processes on "tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1"
W0722 02:50:17.108] I0722 02:50:17.107642    4384 utils.go:117] Killing any existing node processes on "tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0"
W0722 02:50:18.171] I0722 02:50:18.171079    4384 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0722 02:50:18.172] I0722 02:50:18.171126    4384 node_e2e.go:164] Starting tests on "tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0"
W0722 02:50:18.475] I0722 02:50:18.475294    4384 node_e2e.go:164] Starting tests on "tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1"
W0722 02:50:26.689] I0722 02:50:26.688582    4384 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0722 02:50:27.284] I0722 02:50:27.283794    4384 remote.go:202] Got the system logs from journald; copying it back...
W0722 02:50:28.137] I0722 02:50:28.136711    4384 remote.go:122] Copying test artifacts from "tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1"
W0722 02:50:29.421] I0722 02:50:29.421126    4384 run_remote.go:717] Deleting instance "tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1"
I0722 02:50:30.243] 
I0722 02:50:30.244] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0722 02:50:30.244] >                              START TEST                                >
... skipping 46 lines ...
I0722 02:50:30.249] I0722 02:50:22.534522    2853 validators.go:44] Validating package...
I0722 02:50:30.249] PASS
I0722 02:50:30.249] I0722 02:50:22.541860    2781 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0722 02:50:30.249] I0722 02:50:22.541906    2781 remote_runtime.go:43] Connecting to runtime service unix:///run/containerd/containerd.sock
I0722 02:50:30.249] I0722 02:50:22.542038    2781 remote_image.go:41] Connecting to image service unix:///run/containerd/containerd.sock
I0722 02:50:30.250] I0722 02:50:22.542064    2781 image_list.go:146] Pre-pulling images with CRI [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.6.2 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0722 02:50:30.250] E0722 02:50:22.542246    2781 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.250] E0722 02:50:22.542266    2781 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.251] W0722 02:50:22.542279    2781 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (1 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.251] E0722 02:50:23.542437    2781 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.251] E0722 02:50:23.542502    2781 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.251] W0722 02:50:23.542512    2781 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (2 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.252] E0722 02:50:24.542750    2781 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.252] E0722 02:50:24.542827    2781 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.252] W0722 02:50:24.542838    2781 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (3 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.252] E0722 02:50:25.543074    2781 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.252] E0722 02:50:25.543138    2781 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.253] W0722 02:50:25.543151    2781 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (4 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.253] E0722 02:50:26.543385    2781 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.253] E0722 02:50:26.543518    2781 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.253] W0722 02:50:26.543531    2781 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (5 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.253] W0722 02:50:26.543540    2781 image_list.go:163] Could not pre-pull image docker.io/library/busybox:1.29 rpc error: code = Unavailable desc = grpc: the connection is unavailable output: 
I0722 02:50:30.254] 
I0722 02:50:30.254] 
I0722 02:50:30.254] Failure [4.437 seconds]
I0722 02:50:30.254] [BeforeSuite] BeforeSuite 
I0722 02:50:30.254] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.254] 
I0722 02:50:30.254]   Expected error:
I0722 02:50:30.254]       <*status.statusError | 0xc420ff94d0>: {
I0722 02:50:30.254]           Code: 14,
I0722 02:50:30.254]           Message: "grpc: the connection is unavailable",
I0722 02:50:30.254]           Details: nil,
I0722 02:50:30.255]       }
I0722 02:50:30.255]       rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0722 02:50:30.255]   not to have occurred
I0722 02:50:30.255] 
I0722 02:50:30.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:152
I0722 02:50:30.255] ------------------------------
I0722 02:50:30.255] Failure [4.475 seconds]
I0722 02:50:30.255] [BeforeSuite] BeforeSuite 
I0722 02:50:30.255] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.255] 
I0722 02:50:30.255]   BeforeSuite on Node 1 failed
I0722 02:50:30.256] 
I0722 02:50:30.256]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.256] ------------------------------
I0722 02:50:30.256] Failure [4.473 seconds]
I0722 02:50:30.256] [BeforeSuite] BeforeSuite 
I0722 02:50:30.256] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.256] 
I0722 02:50:30.256]   BeforeSuite on Node 1 failed
I0722 02:50:30.256] 
I0722 02:50:30.256]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.257] ------------------------------
I0722 02:50:30.257] Failure [4.430 seconds]
I0722 02:50:30.257] [BeforeSuite] BeforeSuite 
I0722 02:50:30.257] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.257] 
I0722 02:50:30.257]   BeforeSuite on Node 1 failed
I0722 02:50:30.257] 
I0722 02:50:30.257]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.257] ------------------------------
I0722 02:50:30.257] Failure [4.481 seconds]
I0722 02:50:30.258] [BeforeSuite] BeforeSuite 
I0722 02:50:30.258] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.258] 
I0722 02:50:30.258]   BeforeSuite on Node 1 failed
I0722 02:50:30.258] 
I0722 02:50:30.258]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.258] ------------------------------
I0722 02:50:30.258] Failure [4.524 seconds]
I0722 02:50:30.258] [BeforeSuite] BeforeSuite 
I0722 02:50:30.258] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.259] 
I0722 02:50:30.259]   BeforeSuite on Node 1 failed
I0722 02:50:30.259] 
I0722 02:50:30.259]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.259] ------------------------------
I0722 02:50:30.259] Failure [4.527 seconds]
I0722 02:50:30.259] [BeforeSuite] BeforeSuite 
I0722 02:50:30.259] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.259] 
I0722 02:50:30.259]   BeforeSuite on Node 1 failed
I0722 02:50:30.259] 
I0722 02:50:30.260]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.260] ------------------------------
I0722 02:50:30.260] Failure [4.468 seconds]
I0722 02:50:30.260] [BeforeSuite] BeforeSuite 
I0722 02:50:30.260] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.260] 
I0722 02:50:30.260]   BeforeSuite on Node 1 failed
I0722 02:50:30.260] 
I0722 02:50:30.260]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0722 02:50:30.260] ------------------------------
I0722 02:50:30.261] I0722 02:50:26.650556    2781 e2e_node_suite_test.go:191] Tests Finished
I0722 02:50:30.261] 
I0722 02:50:30.261] 
I0722 02:50:30.261] Ran 2208 of 0 Specs in 4.588 seconds
I0722 02:50:30.261] FAIL! -- 0 Passed | 2208 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0722 02:50:30.261] 
I0722 02:50:30.261] Ginkgo ran 1 suite in 7.541725667s
I0722 02:50:30.261] Test Suite Failed
I0722 02:50:30.261] 
I0722 02:50:30.261] Failure Finished Test Suite on Host tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1
I0722 02:50:30.263] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.233.139.17 -- sudo sh -c 'cd /tmp/node-e2e-20190722T025005 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1 --report-dir=/tmp/node-e2e-20190722T025005/results --report-prefix=ubuntu --image-description="ubuntu-gke-1604-xenial-v20170420-1" --kubelet-flags=--experimental-kernel-memcg-notification=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.233.139.17:/tmp/node-e2e-20190722T025005/results/*.log /workspace/_artifacts/tmp-node-e2e-dc64eef8-ubuntu-gke-1604-xenial-v20170420-1] failed with error: exit status 1]
I0722 02:50:30.263] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0722 02:50:30.263] <                              FINISH TEST                               <
I0722 02:50:30.263] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0722 02:50:30.263] 
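Every ImageStatus/PullImage call in the transcript above failed with rpc error: code = Unavailable, meaning the e2e binary never got a working gRPC connection to containerd's CRI socket on the Ubuntu node; the eight BeforeSuite failures and the 2208-spec FAIL are all downstream of that one unreachable socket. A minimal diagnostic sketch (hypothetical, not part of the harness) that reproduces the connectivity check against the same endpoint:

package main

import (
	"context"
	"fmt"
	"net"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// The endpoint from the test flags:
	// --container-runtime-endpoint=unix:///run/containerd/containerd.sock
	const socket = "/run/containerd/containerd.sock"

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.DialContext(ctx, socket,
		grpc.WithInsecure(), // CRI sockets are local and unauthenticated
		grpc.WithBlock(),    // fail now instead of connecting lazily
		grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "unix", addr)
		}),
	)
	if err != nil {
		// This is the state the suite was in: every ImageStatus/PullImage
		// RPC came back "code = Unavailable" because nothing answered here.
		fmt.Println("cannot reach containerd:", err)
		return
	}
	defer conn.Close()
	fmt.Println("containerd socket reachable, state:", conn.GetState())
}

On a healthy node this connects immediately; a dial failure here points at containerd itself (crashed, not yet started, or listening on a different socket) rather than at the tests.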
W0722 03:24:48.895] I0722 03:24:48.894722    4384 remote.go:122] Copying test artifacts from "tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0"
W0722 03:24:53.638] I0722 03:24:53.638126    4384 run_remote.go:717] Deleting instance "tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0"
... skipping 379 lines ...
I0722 03:24:54.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0722 03:24:54.290] STEP: Creating a kubernetes client
I0722 03:24:54.290] STEP: Building a namespace api object, basename init-container
I0722 03:24:54.290] Jul 22 02:52:01.288: INFO: Skipping waiting for service account
I0722 03:24:54.290] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.290]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0722 03:24:54.290] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0722 03:24:54.291]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0722 03:24:54.291] STEP: creating the pod
I0722 03:24:54.291] Jul 22 02:52:01.288: INFO: PodSpec: initContainers in spec.initContainers
I0722 03:24:54.291] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.291]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0722 03:24:54.291] Jul 22 02:52:03.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
I0722 03:24:54.292] Jul 22 02:52:09.282: INFO: namespace e2e-tests-init-container-qwr7s deletion completed in 6.091294299s
I0722 03:24:54.292] 
I0722 03:24:54.292] 
I0722 03:24:54.292] • [SLOW TEST:8.031 seconds]
I0722 03:24:54.292] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.292] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0722 03:24:54.292]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0722 03:24:54.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0722 03:24:54.293] ------------------------------
I0722 03:24:54.293] SS
I0722 03:24:54.293] ------------------------------
I0722 03:24:54.293] [BeforeEach] [sig-storage] Projected
I0722 03:24:54.293]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
... skipping 1099 lines ...
I0722 03:24:54.427] STEP: Building a namespace api object, basename kubelet-test
I0722 03:24:54.428] Jul 22 02:53:35.461: INFO: Skipping waiting for service account
I0722 03:24:54.428] [BeforeEach] [k8s.io] Kubelet
I0722 03:24:54.428]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:37
I0722 03:24:54.428] [BeforeEach] when scheduling a busybox command that always fails in a pod
I0722 03:24:54.428]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:80
I0722 03:24:54.428] [It] should have an error terminated reason [NodeConformance]
I0722 03:24:54.428]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:100
I0722 03:24:54.428] [AfterEach] [k8s.io] Kubelet
I0722 03:24:54.428]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0722 03:24:54.429] Jul 22 02:53:39.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0722 03:24:54.429] STEP: Destroying namespace "e2e-tests-kubelet-test-bbkhs" for this suite.
I0722 03:24:54.429] Jul 22 02:53:49.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 3 lines ...
I0722 03:24:54.430] 
I0722 03:24:54.430] • [SLOW TEST:14.143 seconds]
I0722 03:24:54.430] [k8s.io] Kubelet
I0722 03:24:54.430] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0722 03:24:54.430]   when scheduling a busybox command that always fails in a pod
I0722 03:24:54.430]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:77
I0722 03:24:54.430]     should have an error terminated reason [NodeConformance]
I0722 03:24:54.430]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:100
I0722 03:24:54.430] ------------------------------
I0722 03:24:54.431] [BeforeEach] [sig-storage] ConfigMap
I0722 03:24:54.431]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0722 03:24:54.431] STEP: Creating a kubernetes client
I0722 03:24:54.431] STEP: Building a namespace api object, basename configmap
... skipping 1408 lines ...
I0722 03:24:54.611]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0722 03:24:54.611] STEP: Creating a kubernetes client
I0722 03:24:54.611] STEP: Building a namespace api object, basename init-container
I0722 03:24:54.611] Jul 22 02:54:40.555: INFO: Skipping waiting for service account
I0722 03:24:54.611] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.611]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0722 03:24:54.611] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0722 03:24:54.611]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0722 03:24:54.612] STEP: creating the pod
I0722 03:24:54.612] Jul 22 02:54:40.555: INFO: PodSpec: initContainers in spec.initContainers
I0722 03:24:54.617] Jul 22 02:55:26.149: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0cd868f2-ac2c-11e9-b190-42010a8a003c", GenerateName:"", Namespace:"e2e-tests-init-container-r5lxn", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-r5lxn/pods/pod-init-0cd868f2-ac2c-11e9-b190-42010a8a003c", UID:"0ce32812-ac2c-11e9-b1a6-42010a8a003c", ResourceVersion:"1794", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63699360880, loc:(*time.Location)(0x8856420)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"555855415"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc421d02510), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc421cf80c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc421d02580)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc421d025a0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc421d025b0), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699360880, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699360880, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699360880, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699360880, loc:(*time.Location)(0x8856420)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.60", PodIP:"10.100.0.109", StartTime:(*v1.Time)(0xc420fe6760), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc421cfa1c0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc421cfa230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://8aa5c77db286ce69f59275fc236e1e24d0e15daa21e02903ac6b6c95cc757401"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc420fe67c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc420fe6800), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0722 03:24:54.617] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.618]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0722 03:24:54.618] Jul 22 02:55:26.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0722 03:24:54.618] STEP: Destroying namespace "e2e-tests-init-container-r5lxn" for this suite.
I0722 03:24:54.618] Jul 22 02:55:50.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0722 03:24:54.618] Jul 22 02:55:50.183: INFO: namespace: e2e-tests-init-container-r5lxn, resource: bindings, ignored listing per whitelist
I0722 03:24:54.618] Jul 22 02:55:50.196: INFO: namespace e2e-tests-init-container-r5lxn deletion completed in 24.040952898s
I0722 03:24:54.618] 
I0722 03:24:54.618] 
I0722 03:24:54.619] • [SLOW TEST:69.707 seconds]
I0722 03:24:54.619] [k8s.io] InitContainer [NodeConformance]
I0722 03:24:54.619] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0722 03:24:54.619]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0722 03:24:54.619]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0722 03:24:54.619] ------------------------------
I0722 03:24:54.619] [BeforeEach] [k8s.io] Container Runtime
I0722 03:24:54.619]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0722 03:24:54.619] STEP: Creating a kubernetes client
I0722 03:24:54.620] STEP: Building a namespace api object, basename container-runtime
... skipping 1499 lines ...
I0722 03:24:54.799] [BeforeEach] [k8s.io] Security Context
I0722 03:24:54.799]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0722 03:24:54.799] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [NodeConformance]
I0722 03:24:54.799]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:135
I0722 03:24:54.799] Jul 22 02:58:32.936: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-9759f268-ac2c-11e9-8e87-42010a8a003c" in namespace "e2e-tests-security-context-test-nnxwl" to be "success or failure"
I0722 03:24:54.799] Jul 22 02:58:32.937: INFO: Pod "busybox-readonly-true-9759f268-ac2c-11e9-8e87-42010a8a003c": Phase="Pending", Reason="", readiness=false. Elapsed: 1.116739ms
I0722 03:24:54.800] Jul 22 02:58:34.939: INFO: Pod "busybox-readonly-true-9759f268-ac2c-11e9-8e87-42010a8a003c": Phase="Failed", Reason="", readiness=false. Elapsed: 2.003095919s
I0722 03:24:54.800] Jul 22 02:58:34.939: INFO: Pod "busybox-readonly-true-9759f268-ac2c-11e9-8e87-42010a8a003c" satisfied condition "success or failure"
I0722 03:24:54.800] [AfterEach] [k8s.io] Security Context
I0722 03:24:54.800]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0722 03:24:54.800] Jul 22 02:58:34.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0722 03:24:54.800] STEP: Destroying namespace "e2e-tests-security-context-test-nnxwl" for this suite.
I0722 03:24:54.800] Jul 22 02:58:40.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 839 lines ...
I0722 03:24:54.906] STEP: submitting the pod to kubernetes
I0722 03:24:54.906] STEP: verifying the pod is in kubernetes
I0722 03:24:54.906] STEP: updating the pod
I0722 03:24:54.906] Jul 22 03:00:36.104: INFO: Successfully updated pod "pod-update-activedeadlineseconds-df3cdd53-ac2c-11e9-b190-42010a8a003c"
I0722 03:24:54.906] Jul 22 03:00:36.104: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-df3cdd53-ac2c-11e9-b190-42010a8a003c" in namespace "e2e-tests-pods-pg7zm" to be "terminated due to deadline exceeded"
I0722 03:24:54.906] Jul 22 03:00:36.105: INFO: Pod "pod-update-activedeadlineseconds-df3cdd53-ac2c-11e9-b190-42010a8a003c": Phase="Running", Reason="", readiness=true. Elapsed: 1.322424ms
I0722 03:24:54.907] Jul 22 03:00:38.108: INFO: Pod "pod-update-activedeadlineseconds-df3cdd53-ac2c-11e9-b190-42010a8a003c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.00380461s
I0722 03:24:54.907] Jul 22 03:00:38.108: INFO: Pod "pod-update-activedeadlineseconds-df3cdd53-ac2c-11e9-b190-42010a8a003c" satisfied condition "terminated due to deadline exceeded"
I0722 03:24:54.907] [AfterEach] [k8s.io] Pods
I0722 03:24:54.907]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0722 03:24:54.907] Jul 22 03:00:38.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0722 03:24:54.907] STEP: Destroying namespace "e2e-tests-pods-pg7zm" for this suite.
I0722 03:24:54.907] Jul 22 03:00:44.124: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
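
The passing test above exercises spec.activeDeadlineSeconds: once the updated deadline elapses, the kubelet kills the pod and it lands in Phase=Failed with Reason=DeadlineExceeded, which is exactly the condition the test waits for. The same transition can be triggered with a minimal pod spec; this is an illustrative client-go-style sketch (the name deadline-demo and the 5-second value are invented for illustration):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithDeadline builds a pod that the kubelet will kill and mark
// Phase=Failed / Reason=DeadlineExceeded once the deadline elapses.
func podWithDeadline() *v1.Pod {
	deadline := int64(5) // seconds the pod may stay active after starting
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "deadline-demo"},
		Spec: v1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			RestartPolicy:         v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:    "sleeper",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
}

func main() {
	fmt.Println(podWithDeadline().Name)
}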
... skipping 98 lines ...
I0722 03:24:54.919]   should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
I0722 03:24:54.919]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:649
I0722 03:24:54.919] ------------------------------
I0722 03:24:54.919] I0722 03:24:46.798999    1247 e2e_node_suite_test.go:186] Stopping node services...
I0722 03:24:54.919] I0722 03:24:46.799034    1247 server.go:258] Kill server "services"
I0722 03:24:54.920] I0722 03:24:46.799042    1247 server.go:295] Killing process 1389 (services) with -TERM
I0722 03:24:54.920] E0722 03:24:46.901584    1247 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0722 03:24:54.920] I0722 03:24:46.901604    1247 server.go:258] Kill server "kubelet"
I0722 03:24:54.920] I0722 03:24:46.977860    1247 services.go:145] Fetching log files...
I0722 03:24:54.920] I0722 03:24:46.977934    1247 services.go:154] Get log file "kern.log" with journalctl command [-k].
I0722 03:24:54.920] I0722 03:24:47.024253    1247 services.go:154] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0722 03:24:54.920] I0722 03:24:47.035357    1247 services.go:154] Get log file "docker.log" with journalctl command [-u docker].
I0722 03:24:54.920] I0722 03:24:47.053206    1247 services.go:154] Get log file "containerd.log" with journalctl command [-u containerd].
I0722 03:24:54.921] I0722 03:24:47.584880    1247 services.go:154] Get log file "kubelet.log" with journalctl command [-u kubelet-20190722T025005.service].
I0722 03:24:54.921] I0722 03:24:48.759876    1247 e2e_node_suite_test.go:191] Tests Finished
I0722 03:24:54.921] 
I0722 03:24:54.921] 
I0722 03:24:54.921] Ran 157 of 276 Specs in 2065.691 seconds
I0722 03:24:54.921] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 119 Skipped 
I0722 03:24:54.921] 
I0722 03:24:54.921] Ginkgo ran 1 suite in 34m30.033748776s
I0722 03:24:54.921] Test Suite Passed
I0722 03:24:54.921] 
I0722 03:24:54.922] Success Finished Test Suite on Host tmp-node-e2e-dc64eef8-cos-stable-60-9592-84-0
I0722 03:24:54.922] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0722 03:24:55.024] 2019/07/22 03:24:55 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml' finished in 39m19.374895019s
W0722 03:24:55.024] 2019/07/22 03:24:55 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0722 03:24:55.024] 2019/07/22 03:24:55 node.go:52: Noop - Node Down()
W0722 03:24:55.024] 2019/07/22 03:24:55 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0722 03:24:55.024] 2019/07/22 03:24:55 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0722 03:24:55.244] 2019/07/22 03:24:55 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 233.94379ms
W0722 03:24:55.245] 2019/07/22 03:24:55 main.go:316: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1]
W0722 03:24:55.252] Traceback (most recent call last):
W0722 03:24:55.252]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0722 03:24:55.252]     main(parse_args())
W0722 03:24:55.253]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0722 03:24:55.253]     mode.start(runner_args)
W0722 03:24:55.253]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0722 03:24:55.253]     check_env(env, self.command, *args)
W0722 03:24:55.254]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0722 03:24:55.254]     subprocess.check_call(cmd, env=env)
W0722 03:24:55.254]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0722 03:24:55.254]     raise CalledProcessError(retcode, cmd)
W0722 03:24:55.255] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m')' returned non-zero exit status 1
E0722 03:24:55.259] Command failed
I0722 03:24:55.259] process 326 exited with code 1 after 39.4m
E0722 03:24:55.259] FAIL: ci-containerd-node-e2e-1-2
I0722 03:24:55.260] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0722 03:24:55.800] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0722 03:24:55.858] process 25331 exited with code 0 after 0.0m
I0722 03:24:55.858] Call:  gcloud config get-value account
I0722 03:24:56.197] process 25343 exited with code 0 after 0.0m
I0722 03:24:56.197] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0722 03:24:56.198] Upload result and artifacts...
I0722 03:24:56.198] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1153133643833544704
I0722 03:24:56.199] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1153133643833544704/artifacts
W0722 03:24:57.285] CommandException: One or more URLs matched no objects.
E0722 03:24:57.424] Command failed
I0722 03:24:57.424] process 25355 exited with code 1 after 0.0m
W0722 03:24:57.424] Remote dir gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1153133643833544704/artifacts not exist yet
I0722 03:24:57.425] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1153133643833544704/artifacts
I0722 03:25:00.219] process 25497 exited with code 0 after 0.0m
I0722 03:25:00.220] Call:  git rev-parse HEAD
I0722 03:25:00.225] process 26072 exited with code 0 after 0.0m
... skipping 13 lines ...