Result: FAILURE
Tests: 1 failed / 164 succeeded
Started: 2019-07-20 18:12
Elapsed: 36m46s
Revision: v1.12.11-beta.0.1+5f799a487b70ae
Builder: gke-prow-ssd-pool-1a225945-86t9
pod: e0c7ba76-ab19-11e9-b82b-365474bd0c86
resultstore: https://source.cloud.google.com/results/invocations/dfe7de22-e350-433e-a4ec-f9693e413fde/targets/test
infra-commit: a7f2c5488
job-version: v1.12.11-beta.0.1+5f799a487b70ae
repo: k8s.io/kubernetes
repo-commit: 5f799a487b70aea5e298e5f5f1e3bac904b54ef6
repos: k8s.io/kubernetes @ release-1.12, github.com/containerd/cri @ release/1.2

Test Failures


Node Tests (35m31s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1
				from junit_runner.xml
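The `--ginkgo-flags` in the failing command drive spec selection: a spec runs only if its full name matches the `--focus` regex and does not match the `--skip` regex. A minimal shell sketch of that selection rule, with invented spec names purely for illustration:

```shell
# Same regexes as the job's --ginkgo-flags; grep -E gives the
# extended-regex semantics ginkgo applies to full spec names.
focus='\[NodeConformance\]'
skip='\[Flaky\]|\[Serial\]'

selected=""
for spec in \
  'Pods should be updated [NodeConformance]' \
  'Restart should recover [NodeConformance] [Serial]' \
  'Networking sometimes works [Flaky]'
do
  # A spec runs if it matches focus and does not match skip.
  if echo "$spec" | grep -Eq "$focus" && ! echo "$spec" | grep -Eq "$skip"; then
    echo "RUN:  $spec"
    selected="${selected}${spec};"
  else
    echo "SKIP: $spec"
  fi
done
```

Only the first invented spec survives both filters; `--flakeAttempts=2` then gives each selected spec one retry before it is reported as failed.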



164 passed tests and 119 skipped tests omitted here.

Error lines from build-log.txt

... skipping 169 lines ...
W0720 18:14:11.852]       [Service]
W0720 18:14:11.852]       Type=oneshot
W0720 18:14:11.852]       RemainAfterExit=yes
W0720 18:14:11.852]       ExecStartPre=/bin/mkdir -p /home/containerd
W0720 18:14:11.852]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0720 18:14:11.852]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0720 18:14:11.853]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0720 18:14:11.853]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0720 18:14:11.853]       ExecStart=/home/containerd/configure.sh
W0720 18:14:11.853] 
W0720 18:14:11.853]       [Install]
W0720 18:14:11.853]       WantedBy=containerd.target
W0720 18:14:11.853] 
... skipping 74 lines ...
W0720 18:14:11.867] # fetch_metadata fetches metadata from GCE metadata server.
W0720 18:14:11.868] # Var set:
W0720 18:14:11.868] # 1. Metadata key: key of the metadata.
W0720 18:14:11.868] fetch_metadata() {
W0720 18:14:11.868]   local -r key=$1
W0720 18:14:11.868]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0720 18:14:11.868]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0720 18:14:11.869]     grep -q "^${key}$"; then
W0720 18:14:11.869]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0720 18:14:11.869]       "${attributes}/${key}"
W0720 18:14:11.869]   fi
W0720 18:14:11.869] }
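The `fetch_metadata` helper above can only run on a GCE VM, but its two-step pattern (list the attribute keys, then fetch the one requested) can be exercised against a local stand-in. The stub directory, the `fetch_metadata_stub` name, and the `version` key below are hypothetical, for illustration only:

```shell
# Local stand-in for the GCE metadata attributes endpoint: each file
# in the directory plays the role of one metadata key.
attrs_dir=$(mktemp -d)
echo "v1.2.7" > "${attrs_dir}/version"

fetch_metadata_stub() {
  local -r key=$1
  # The real fetch_metadata curls ${attributes}/ to list keys, then
  # ${attributes}/${key} to read one; here both steps hit the stub dir,
  # and a missing key yields empty output, just like the original.
  if ls "${attrs_dir}" | grep -q "^${key}$"; then
    cat "${attrs_dir}/${key}"
  fi
}

version=$(fetch_metadata_stub "version")
missing=$(fetch_metadata_stub "no-such-key")
echo "version=${version}"
```

The existence check before the fetch is what lets callers in the startup script treat an absent key as "unset" rather than as a curl failure.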
W0720 18:14:11.869] 
W0720 18:14:11.869] # fetch_env fetches environment variables from GCE metadata server
W0720 18:14:11.870] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0720 18:14:11.879]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0720 18:14:11.879]     deploy_path="${deploy_path}/${deploy_dir}"
W0720 18:14:11.879]   fi
W0720 18:14:11.879] 
W0720 18:14:11.879]   # TODO(random-liu): Put version into the metadata instead of
W0720 18:14:11.879]   # deciding it in cloud init. This may cause issue to reboot test.
W0720 18:14:11.880]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0720 18:14:11.880]     https://storage.googleapis.com/${deploy_path}/latest)
W0720 18:14:11.880] fi
W0720 18:14:11.880] 
W0720 18:14:11.880] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0720 18:14:11.880] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0720 18:14:11.880] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 155 lines ...
W0720 18:14:11.902] # fetch_metadata fetches metadata from GCE metadata server.
W0720 18:14:11.902] # Var set:
W0720 18:14:11.902] # 1. Metadata key: key of the metadata.
W0720 18:14:11.903] fetch_metadata() {
W0720 18:14:11.903]   local -r key=$1
W0720 18:14:11.903]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0720 18:14:11.903]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0720 18:14:11.903]     grep -q "^${key}$"; then
W0720 18:14:11.903]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0720 18:14:11.903]       "${attributes}/${key}"
W0720 18:14:11.904]   fi
W0720 18:14:11.904] }
W0720 18:14:11.904] 
W0720 18:14:11.904] # fetch_env fetches environment variables from GCE metadata server
W0720 18:14:11.904] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0720 18:14:11.912]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0720 18:14:11.912]     deploy_path="${deploy_path}/${deploy_dir}"
W0720 18:14:11.912]   fi
W0720 18:14:11.913] 
W0720 18:14:11.913]   # TODO(random-liu): Put version into the metadata instead of
W0720 18:14:11.913]   # deciding it in cloud init. This may cause issue to reboot test.
W0720 18:14:11.913]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0720 18:14:11.913]     https://storage.googleapis.com/${deploy_path}/latest)
W0720 18:14:11.913] fi
W0720 18:14:11.913] 
W0720 18:14:11.914] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0720 18:14:11.914] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0720 18:14:11.914] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 103 lines ...
W0720 18:14:11.928]       [Service]
W0720 18:14:11.928]       Type=oneshot
W0720 18:14:11.928]       RemainAfterExit=yes
W0720 18:14:11.928]       ExecStartPre=/bin/mkdir -p /home/containerd
W0720 18:14:11.928]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0720 18:14:11.928]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0720 18:14:11.928]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0720 18:14:11.929]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0720 18:14:11.929]       ExecStart=/home/containerd/configure.sh
W0720 18:14:11.929] 
W0720 18:14:11.929]       [Install]
W0720 18:14:11.929]       WantedBy=containerd.target
W0720 18:14:11.929] 
... skipping 82 lines ...
W0720 18:18:30.320] I0720 18:18:30.320187    4401 node_e2e.go:164] Starting tests on "tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1"
W0720 18:18:30.621] I0720 18:18:30.620426    4401 remote.go:97] Extracting tar on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:34.863] I0720 18:18:34.862349    4401 remote.go:112] Running test on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:34.863] I0720 18:18:34.862397    4401 utils.go:55] Install CNI on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:36.062] I0720 18:18:36.061495    4401 utils.go:68] Adding CNI configuration on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:36.640] I0720 18:18:36.639919    4401 utils.go:82] Configure iptables firewall rules on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:38.730] I0720 18:18:38.730496    4401 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0720 18:18:38.763] I0720 18:18:38.762751    4401 utils.go:117] Killing any existing node processes on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:39.362] I0720 18:18:39.362039    4401 remote.go:202] Got the system logs from journald; copying it back...
W0720 18:18:39.838] I0720 18:18:39.838068    4401 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0720 18:18:39.839] I0720 18:18:39.838106    4401 node_e2e.go:164] Starting tests on "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:18:40.297] I0720 18:18:40.296707    4401 remote.go:122] Copying test artifacts from "tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1"
W0720 18:18:41.463] I0720 18:18:41.462562    4401 run_remote.go:717] Deleting instance "tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1"
... skipping 49 lines ...
I0720 18:18:42.246] I0720 18:18:34.460204    2874 validators.go:44] Validating package...
I0720 18:18:42.246] PASS
I0720 18:18:42.247] I0720 18:18:34.462790    2810 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0720 18:18:42.247] I0720 18:18:34.462844    2810 remote_runtime.go:43] Connecting to runtime service unix:///run/containerd/containerd.sock
I0720 18:18:42.247] I0720 18:18:34.462988    2810 remote_image.go:41] Connecting to image service unix:///run/containerd/containerd.sock
I0720 18:18:42.248] I0720 18:18:34.463017    2810 image_list.go:146] Pre-pulling images with CRI [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.6.2 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0720 18:18:42.248] E0720 18:18:34.463205    2810 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.249] E0720 18:18:34.463225    2810 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.249] W0720 18:18:34.463238    2810 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (1 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.249] E0720 18:18:35.463490    2810 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.249] E0720 18:18:35.574803    2810 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.250] W0720 18:18:35.574833    2810 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (2 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.250] E0720 18:18:36.575076    2810 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.250] E0720 18:18:36.575136    2810 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.251] W0720 18:18:36.575148    2810 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (3 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.251] E0720 18:18:37.575361    2810 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.251] E0720 18:18:37.575420    2810 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.251] W0720 18:18:37.575433    2810 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (4 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.252] E0720 18:18:38.575684    2810 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.252] E0720 18:18:38.575737    2810 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.252] W0720 18:18:38.575747    2810 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (5 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.252] W0720 18:18:38.575756    2810 image_list.go:163] Could not pre-pull image docker.io/library/busybox:1.29 rpc error: code = Unavailable desc = grpc: the connection is unavailable output: 
I0720 18:18:42.253] 
I0720 18:18:42.253] 
I0720 18:18:42.253] Failure [4.880 seconds]
I0720 18:18:42.253] [BeforeSuite] BeforeSuite 
I0720 18:18:42.253] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.253] 
I0720 18:18:42.253]   Expected error:
I0720 18:18:42.254]       <*status.statusError | 0xc42124e900>: {
I0720 18:18:42.254]           Code: 14,
I0720 18:18:42.254]           Message: "grpc: the connection is unavailable",
I0720 18:18:42.254]           Details: nil,
I0720 18:18:42.254]       }
I0720 18:18:42.254]       rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0720 18:18:42.254]   not to have occurred
I0720 18:18:42.254] 
I0720 18:18:42.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:152
I0720 18:18:42.255] ------------------------------
I0720 18:18:42.255] Failure [4.783 seconds]
I0720 18:18:42.255] [BeforeSuite] BeforeSuite 
I0720 18:18:42.255] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.255] 
I0720 18:18:42.256]   BeforeSuite on Node 1 failed
I0720 18:18:42.256] 
I0720 18:18:42.256]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.256] ------------------------------
I0720 18:18:42.256] Failure [4.777 seconds]
I0720 18:18:42.256] [BeforeSuite] BeforeSuite 
I0720 18:18:42.256] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.256] 
I0720 18:18:42.256]   BeforeSuite on Node 1 failed
I0720 18:18:42.257] 
I0720 18:18:42.257]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.257] ------------------------------
I0720 18:18:42.257] Failure [4.825 seconds]
I0720 18:18:42.257] [BeforeSuite] BeforeSuite 
I0720 18:18:42.257] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.257] 
I0720 18:18:42.257]   BeforeSuite on Node 1 failed
I0720 18:18:42.257] 
I0720 18:18:42.258]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.258] ------------------------------
I0720 18:18:42.258] Failure [4.779 seconds]
I0720 18:18:42.258] [BeforeSuite] BeforeSuite 
I0720 18:18:42.258] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.258] 
I0720 18:18:42.258]   BeforeSuite on Node 1 failed
I0720 18:18:42.259] 
I0720 18:18:42.259]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.259] ------------------------------
I0720 18:18:42.259] Failure [4.902 seconds]
I0720 18:18:42.259] [BeforeSuite] BeforeSuite 
I0720 18:18:42.259] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.260] 
I0720 18:18:42.260]   BeforeSuite on Node 1 failed
I0720 18:18:42.260] 
I0720 18:18:42.260]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.260] ------------------------------
I0720 18:18:42.260] Failure [4.785 seconds]
I0720 18:18:42.260] [BeforeSuite] BeforeSuite 
I0720 18:18:42.261] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.261] 
I0720 18:18:42.261]   BeforeSuite on Node 1 failed
I0720 18:18:42.261] 
I0720 18:18:42.261]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.261] ------------------------------
I0720 18:18:42.261] Failure [4.825 seconds]
I0720 18:18:42.262] [BeforeSuite] BeforeSuite 
I0720 18:18:42.262] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.262] 
I0720 18:18:42.262]   BeforeSuite on Node 1 failed
I0720 18:18:42.262] 
I0720 18:18:42.262]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0720 18:18:42.263] ------------------------------
I0720 18:18:42.263] I0720 18:18:38.681000    2810 e2e_node_suite_test.go:191] Tests Finished
I0720 18:18:42.263] 
I0720 18:18:42.263] 
I0720 18:18:42.263] Ran 2208 of 0 Specs in 4.988 seconds
I0720 18:18:42.263] FAIL! -- 0 Passed | 2208 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0720 18:18:42.263] 
I0720 18:18:42.264] Ginkgo ran 1 suite in 7.82254913s
I0720 18:18:42.264] Test Suite Failed
I0720 18:18:42.264] 
I0720 18:18:42.264] Failure Finished Test Suite on Host tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1
I0720 18:18:42.265] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.230.29.199 -- sudo sh -c 'cd /tmp/node-e2e-20190720T181818 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1 --report-dir=/tmp/node-e2e-20190720T181818/results --report-prefix=ubuntu --image-description="ubuntu-gke-1604-xenial-v20170420-1" --kubelet-flags=--experimental-kernel-memcg-notification=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.230.29.199:/tmp/node-e2e-20190720T181818/results/*.log /workspace/_artifacts/tmp-node-e2e-2cac7f5d-ubuntu-gke-1604-xenial-v20170420-1] failed with error: exit status 1]
I0720 18:18:42.265] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0720 18:18:42.266] <                              FINISH TEST                               <
I0720 18:18:42.266] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0720 18:18:42.266] 
W0720 18:49:03.822] I0720 18:49:03.821282    4401 remote.go:122] Copying test artifacts from "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
W0720 18:49:08.901] I0720 18:49:08.901085    4401 run_remote.go:717] Deleting instance "tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0"
... skipping 1277 lines ...
I0720 18:49:09.947] STEP: verifying the pod is in kubernetes
I0720 18:49:09.947] STEP: updating the pod
I0720 18:49:09.948] Jul 20 18:21:42.621: INFO: Successfully updated pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041"
I0720 18:49:09.948] Jul 20 18:21:42.621: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041" in namespace "e2e-tests-pods-tp22t" to be "terminated due to deadline exceeded"
I0720 18:49:09.948] Jul 20 18:21:42.623: INFO: Pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041": Phase="Running", Reason="", readiness=true. Elapsed: 1.759931ms
I0720 18:49:09.948] Jul 20 18:21:44.625: INFO: Pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041": Phase="Running", Reason="", readiness=true. Elapsed: 2.003818619s
I0720 18:49:09.948] Jul 20 18:21:46.627: INFO: Pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.005575711s
I0720 18:49:09.949] Jul 20 18:21:46.627: INFO: Pod "pod-update-activedeadlineseconds-37da4d84-ab1b-11e9-b36e-42010a8a0041" satisfied condition "terminated due to deadline exceeded"
I0720 18:49:09.949] [AfterEach] [k8s.io] Pods
I0720 18:49:09.949]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:09.949] Jul 20 18:21:46.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0720 18:49:09.949] STEP: Destroying namespace "e2e-tests-pods-tp22t" for this suite.
I0720 18:49:09.949] Jul 20 18:21:52.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 176 lines ...
I0720 18:49:09.971] [BeforeEach] [k8s.io] Security Context
I0720 18:49:09.972]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:35
I0720 18:49:09.972] [It] should run the container with readonly rootfs when readOnlyRootFilesystem=true [NodeConformance]
I0720 18:49:09.972]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:135
I0720 18:49:09.972] Jul 20 18:22:00.818: INFO: Waiting up to 5m0s for pod "busybox-readonly-true-442a989d-ab1b-11e9-b36e-42010a8a0041" in namespace "e2e-tests-security-context-test-z2fb8" to be "success or failure"
I0720 18:49:09.972] Jul 20 18:22:00.819: INFO: Pod "busybox-readonly-true-442a989d-ab1b-11e9-b36e-42010a8a0041": Phase="Pending", Reason="", readiness=false. Elapsed: 1.153196ms
I0720 18:49:09.973] Jul 20 18:22:02.821: INFO: Pod "busybox-readonly-true-442a989d-ab1b-11e9-b36e-42010a8a0041": Phase="Failed", Reason="", readiness=false. Elapsed: 2.003111963s
I0720 18:49:09.973] Jul 20 18:22:02.821: INFO: Pod "busybox-readonly-true-442a989d-ab1b-11e9-b36e-42010a8a0041" satisfied condition "success or failure"
I0720 18:49:09.973] [AfterEach] [k8s.io] Security Context
I0720 18:49:09.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:09.973] Jul 20 18:22:02.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0720 18:49:09.973] STEP: Destroying namespace "e2e-tests-security-context-test-z2fb8" for this suite.
I0720 18:49:09.973] Jul 20 18:22:08.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 1412 lines ...
I0720 18:49:10.152]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0720 18:49:10.152] STEP: Creating a kubernetes client
I0720 18:49:10.152] STEP: Building a namespace api object, basename init-container
I0720 18:49:10.153] Jul 20 18:23:34.023: INFO: Skipping waiting for service account
I0720 18:49:10.153] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.153]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0720 18:49:10.153] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0720 18:49:10.153]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0720 18:49:10.153] STEP: creating the pod
I0720 18:49:10.153] Jul 20 18:23:34.023: INFO: PodSpec: initContainers in spec.initContainers
I0720 18:49:10.159] Jul 20 18:24:24.671: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7bc12712-ab1b-11e9-8af8-42010a8a0041", GenerateName:"", Namespace:"e2e-tests-init-container-t9bb5", SelfLink:"/api/v1/namespaces/e2e-tests-init-container-t9bb5/pods/pod-init-7bc12712-ab1b-11e9-8af8-42010a8a0041", UID:"7bc17b78-ab1b-11e9-8aeb-42010a8a0041", ResourceVersion:"1852", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63699243814, loc:(*time.Location)(0x8856420)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"23352163"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Never", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc4210d0ba0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"Default", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc42196fd40), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc4210d0c10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc4210d0c30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc4210d0c40), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699243814, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699243814, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699243814, loc:(*time.Location)(0x8856420)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699243814, 
loc:(*time.Location)(0x8856420)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.138.0.65", PodIP:"10.100.0.121", StartTime:(*v1.Time)(0xc4216a78a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc420e2ebd0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc420e2ec40)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9", ContainerID:"containerd://091610c5cee11cb89bafb640576c22c5c64005977c35f59bd2adbbbd96981166"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4216a7900), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc4216a7940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
I0720 18:49:10.160] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.160]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:10.160] Jul 20 18:24:24.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0720 18:49:10.160] STEP: Destroying namespace "e2e-tests-init-container-t9bb5" for this suite.
I0720 18:49:10.160] Jul 20 18:24:46.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
I0720 18:49:10.160] Jul 20 18:24:46.709: INFO: namespace: e2e-tests-init-container-t9bb5, resource: bindings, ignored listing per whitelist
I0720 18:49:10.161] Jul 20 18:24:46.714: INFO: namespace e2e-tests-init-container-t9bb5 deletion completed in 22.04126172s
I0720 18:49:10.161] 
I0720 18:49:10.161] 
I0720 18:49:10.161] • [SLOW TEST:72.711 seconds]
I0720 18:49:10.161] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.161] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0720 18:49:10.161]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0720 18:49:10.161]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0720 18:49:10.162] ------------------------------
I0720 18:49:10.162] S
I0720 18:49:10.162] ------------------------------
I0720 18:49:10.162] [BeforeEach] [sig-storage] Projected
I0720 18:49:10.162]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
... skipping 519 lines ...
I0720 18:49:10.226]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
I0720 18:49:10.226] STEP: Creating a kubernetes client
I0720 18:49:10.227] STEP: Building a namespace api object, basename init-container
I0720 18:49:10.227] Jul 20 18:25:44.717: INFO: Skipping waiting for service account
I0720 18:49:10.227] [BeforeEach] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.227]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:43
I0720 18:49:10.227] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0720 18:49:10.227]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0720 18:49:10.227] STEP: creating the pod
I0720 18:49:10.227] Jul 20 18:25:44.717: INFO: PodSpec: initContainers in spec.initContainers
I0720 18:49:10.228] [AfterEach] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.228]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:10.228] Jul 20 18:25:47.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 3 lines ...
I0720 18:49:10.228] Jul 20 18:25:53.511: INFO: namespace e2e-tests-init-container-2q5dc deletion completed in 6.046005766s
I0720 18:49:10.228] 
I0720 18:49:10.228] 
I0720 18:49:10.229] • [SLOW TEST:8.805 seconds]
I0720 18:49:10.229] [k8s.io] InitContainer [NodeConformance]
I0720 18:49:10.229] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0720 18:49:10.229]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0720 18:49:10.229]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:699
I0720 18:49:10.229] ------------------------------
I0720 18:49:10.229] SS
I0720 18:49:10.229] ------------------------------
I0720 18:49:10.229] [BeforeEach] [sig-storage] ConfigMap
I0720 18:49:10.230]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
... skipping 1042 lines ...
I0720 18:49:10.358] Jul 20 18:22:52.720: INFO: Skipping waiting for service account
I0720 18:49:10.358] [It] should be able to pull from private registry with credential provider [NodeConformance]
I0720 18:49:10.358]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/runtime_conformance_test.go:69
I0720 18:49:10.358] STEP: create the container
I0720 18:49:10.358] STEP: check the container status
I0720 18:49:10.358] STEP: delete the container
I0720 18:49:10.359] Jul 20 18:27:53.625: INFO: No.1 attempt failed: expected container state: Running, got: "Waiting", retrying...
I0720 18:49:10.359] STEP: create the container
I0720 18:49:10.359] STEP: check the container status
I0720 18:49:10.359] STEP: delete the container
I0720 18:49:10.359] [AfterEach] [k8s.io] Container Runtime Conformance Test
I0720 18:49:10.359]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:10.359] Jul 20 18:27:55.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 448 lines ...
I0720 18:49:10.415] STEP: Building a namespace api object, basename kubelet-test
I0720 18:49:10.415] Jul 20 18:28:31.586: INFO: Skipping waiting for service account
I0720 18:49:10.415] [BeforeEach] [k8s.io] Kubelet
I0720 18:49:10.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:37
I0720 18:49:10.415] [BeforeEach] when scheduling a busybox command that always fails in a pod
I0720 18:49:10.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:80
I0720 18:49:10.415] [It] should have an error terminated reason [NodeConformance]
I0720 18:49:10.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:100
I0720 18:49:10.415] [AfterEach] [k8s.io] Kubelet
I0720 18:49:10.416]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:10.416] Jul 20 18:28:35.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0720 18:49:10.416] STEP: Destroying namespace "e2e-tests-kubelet-test-fbrdb" for this suite.
I0720 18:49:10.416] Jul 20 18:28:43.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 3 lines ...
I0720 18:49:10.416] 
I0720 18:49:10.416] • [SLOW TEST:12.061 seconds]
I0720 18:49:10.417] [k8s.io] Kubelet
I0720 18:49:10.417] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:694
I0720 18:49:10.417]   when scheduling a busybox command that always fails in a pod
I0720 18:49:10.417]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:77
I0720 18:49:10.417]     should have an error terminated reason [NodeConformance]
I0720 18:49:10.417]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/kubelet_test.go:100
I0720 18:49:10.417] ------------------------------
I0720 18:49:10.417] SSSS
I0720 18:49:10.418] ------------------------------
I0720 18:49:10.418] [BeforeEach] [k8s.io] Kubelet
I0720 18:49:10.418]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:147
... skipping 374 lines ...
I0720 18:49:10.461] Jul 20 18:28:30.769: INFO: Skipping waiting for service account
I0720 18:49:10.462] [It] should not be able to pull from private registry without secret [NodeConformance]
I0720 18:49:10.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:299
I0720 18:49:10.462] STEP: create the container
I0720 18:49:10.462] STEP: check the container status
I0720 18:49:10.462] STEP: delete the container
I0720 18:49:10.462] Jul 20 18:33:31.558: INFO: No.1 attempt failed: expected container state: Waiting, got: "Running", retrying...
I0720 18:49:10.462] STEP: create the container
I0720 18:49:10.462] STEP: check the container status
I0720 18:49:10.462] STEP: delete the container
I0720 18:49:10.463] [AfterEach] [k8s.io] Container Runtime
I0720 18:49:10.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0720 18:49:10.463] Jul 20 18:33:33.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 43 lines ...
I0720 18:49:10.468]   should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
I0720 18:49:10.468]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:649
I0720 18:49:10.468] ------------------------------
I0720 18:49:10.468] I0720 18:49:01.681971    1249 e2e_node_suite_test.go:186] Stopping node services...
I0720 18:49:10.468] I0720 18:49:01.682044    1249 server.go:258] Kill server "services"
I0720 18:49:10.468] I0720 18:49:01.682057    1249 server.go:295] Killing process 1393 (services) with -TERM
I0720 18:49:10.469] E0720 18:49:01.800350    1249 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0720 18:49:10.469] I0720 18:49:01.800370    1249 server.go:258] Kill server "kubelet"
I0720 18:49:10.469] I0720 18:49:01.874053    1249 services.go:145] Fetching log files...
I0720 18:49:10.469] I0720 18:49:01.874121    1249 services.go:154] Get log file "kern.log" with journalctl command [-k].
I0720 18:49:10.469] I0720 18:49:01.909610    1249 services.go:154] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0720 18:49:10.469] I0720 18:49:01.920510    1249 services.go:154] Get log file "docker.log" with journalctl command [-u docker].
I0720 18:49:10.469] I0720 18:49:01.938359    1249 services.go:154] Get log file "containerd.log" with journalctl command [-u containerd].
I0720 18:49:10.470] I0720 18:49:02.482444    1249 services.go:154] Get log file "kubelet.log" with journalctl command [-u kubelet-20190720T181818.service].
I0720 18:49:10.470] I0720 18:49:03.681970    1249 e2e_node_suite_test.go:191] Tests Finished
I0720 18:49:10.470] 
I0720 18:49:10.470] 
I0720 18:49:10.470] Ran 157 of 276 Specs in 1818.600 seconds
I0720 18:49:10.470] SUCCESS! -- 157 Passed | 0 Failed | 0 Flaked | 0 Pending | 119 Skipped 
I0720 18:49:10.470] 
I0720 18:49:10.470] Ginkgo ran 1 suite in 30m23.224856099s
I0720 18:49:10.470] Test Suite Passed
I0720 18:49:10.470] 
I0720 18:49:10.470] Success Finished Test Suite on Host tmp-node-e2e-2cac7f5d-cos-stable-60-9592-84-0
I0720 18:49:10.471] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0720 18:49:10.578] 2019/07/20 18:49:10 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml' finished in 35m31.252580321s
W0720 18:49:10.578] 2019/07/20 18:49:10 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0720 18:49:10.578] 2019/07/20 18:49:10 node.go:52: Noop - Node Down()
W0720 18:49:10.578] 2019/07/20 18:49:10 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0720 18:49:10.578] 2019/07/20 18:49:10 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0720 18:49:10.816] 2019/07/20 18:49:10 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 238.007684ms
W0720 18:49:10.817] 2019/07/20 18:49:10 main.go:316: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1]
W0720 18:49:10.821] Traceback (most recent call last):
W0720 18:49:10.821]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0720 18:49:10.821]     main(parse_args())
W0720 18:49:10.821]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0720 18:49:10.822]     mode.start(runner_args)
W0720 18:49:10.822]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0720 18:49:10.822]     check_env(env, self.command, *args)
W0720 18:49:10.822]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0720 18:49:10.822]     subprocess.check_call(cmd, env=env)
W0720 18:49:10.823]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0720 18:49:10.823]     raise CalledProcessError(retcode, cmd)
W0720 18:49:10.824] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeConformance\\]" --skip="\\[Flaky\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m')' returned non-zero exit status 1
E0720 18:49:10.833] Command failed
I0720 18:49:10.834] process 326 exited with code 1 after 35.6m
E0720 18:49:10.834] FAIL: ci-containerd-node-e2e-1-2
I0720 18:49:10.834] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0720 18:49:11.377] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0720 18:49:11.439] process 25398 exited with code 0 after 0.0m
I0720 18:49:11.439] Call:  gcloud config get-value account
I0720 18:49:11.751] process 25410 exited with code 0 after 0.0m
I0720 18:49:11.751] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0720 18:49:11.751] Upload result and artifacts...
I0720 18:49:11.751] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1152642415954235394
I0720 18:49:11.752] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1152642415954235394/artifacts
W0720 18:49:12.791] CommandException: One or more URLs matched no objects.
E0720 18:49:12.913] Command failed
I0720 18:49:12.914] process 25422 exited with code 1 after 0.0m
W0720 18:49:12.914] Remote dir gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1152642415954235394/artifacts not exist yet
I0720 18:49:12.914] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-1-2/1152642415954235394/artifacts
I0720 18:49:15.477] process 25564 exited with code 0 after 0.0m
I0720 18:49:15.478] Call:  git rev-parse HEAD
I0720 18:49:15.483] process 26139 exited with code 0 after 0.0m
... skipping 13 lines ...