Result: FAILURE
Tests: 1 failed / 40 succeeded
Started: 2019-07-16 15:15
Elapsed: 8m50s
Revision: v1.12.11-beta.0.1+5f799a487b70ae
Builder: gke-prow-ssd-pool-1a225945-577q
Pod: 71463728-a7dc-11e9-bf1c-d671f3ed8202
Resultstore: https://source.cloud.google.com/results/invocations/f4ff9616-2d4a-4446-8303-7c7740884ae3/targets/test
infra-commit: b22832fd7
job-version: v1.12.11-beta.0.1+5f799a487b70ae
repo: k8s.io/kubernetes
repo-commit: 5f799a487b70aea5e298e5f5f1e3bac904b54ef6
repos: k8s.io/kubernetes (branch release-1.12), github.com/containerd/cri (branch release/1.2)

Test Failures


Node Tests (7m42s)

error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1
				from junit_runner.xml
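
The underlying error appears further down in the build log: on the Ubuntu node, the pre-pull of docker.io/library/busybox:1.29 and the Ginkgo BeforeSuite both failed with "rpc error: code = Unavailable desc = grpc: the connection is unavailable" against unix:///run/containerd/containerd.sock, i.e. the containerd CRI endpoint never became reachable on that VM. Below is a minimal triage sketch for a node in that state; it assumes SSH access to the instance and that a crictl binary is available there, neither of which is guaranteed by this job's artifacts.

# Triage sketch (assumes SSH access to the node and crictl installed on it).
# Is the containerd unit running, and did it log a startup failure?
sudo systemctl status containerd --no-pager
# Same source as the job's containerd.log extra-log (journalctl -u containerd):
sudo journalctl -u containerd --no-pager | tail -n 100

# Does the CRI socket exist and answer gRPC requests?
ls -l /run/containerd/containerd.sock
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

# If the endpoint responds, retry the pull the suite gave up on after 5 attempts:
sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull docker.io/library/busybox:1.29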



Passed tests: 40
Skipped tests: 243

Error lines from build-log.txt

... skipping 169 lines ...
W0716 15:17:04.264]       [Service]
W0716 15:17:04.264]       Type=oneshot
W0716 15:17:04.264]       RemainAfterExit=yes
W0716 15:17:04.264]       ExecStartPre=/bin/mkdir -p /home/containerd
W0716 15:17:04.264]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0716 15:17:04.264]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0716 15:17:04.265]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0716 15:17:04.265]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0716 15:17:04.265]       ExecStart=/home/containerd/configure.sh
W0716 15:17:04.265] 
W0716 15:17:04.265]       [Install]
W0716 15:17:04.265]       WantedBy=containerd.target
W0716 15:17:04.265] 
... skipping 74 lines ...
W0716 15:17:04.275] # fetch_metadata fetches metadata from GCE metadata server.
W0716 15:17:04.275] # Var set:
W0716 15:17:04.276] # 1. Metadata key: key of the metadata.
W0716 15:17:04.276] fetch_metadata() {
W0716 15:17:04.276]   local -r key=$1
W0716 15:17:04.276]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0716 15:17:04.276]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0716 15:17:04.276]     grep -q "^${key}$"; then
W0716 15:17:04.277]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0716 15:17:04.277]       "${attributes}/${key}"
W0716 15:17:04.277]   fi
W0716 15:17:04.277] }
W0716 15:17:04.277] 
W0716 15:17:04.277] # fetch_env fetches environment variables from GCE metadata server
W0716 15:17:04.277] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0716 15:17:04.286]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0716 15:17:04.286]     deploy_path="${deploy_path}/${deploy_dir}"
W0716 15:17:04.286]   fi
W0716 15:17:04.287] 
W0716 15:17:04.287]   # TODO(random-liu): Put version into the metadata instead of
W0716 15:17:04.287]   # deciding it in cloud init. This may cause issue to reboot test.
W0716 15:17:04.287]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0716 15:17:04.287]     https://storage.googleapis.com/${deploy_path}/latest)
W0716 15:17:04.287] fi
W0716 15:17:04.287] 
W0716 15:17:04.288] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0716 15:17:04.288] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0716 15:17:04.288] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 140 lines ...
W0716 15:17:04.307]       [Service]
W0716 15:17:04.307]       Type=oneshot
W0716 15:17:04.307]       RemainAfterExit=yes
W0716 15:17:04.308]       ExecStartPre=/bin/mkdir -p /home/containerd
W0716 15:17:04.308]       ExecStartPre=/bin/mount --bind /home/containerd /home/containerd
W0716 15:17:04.308]       ExecStartPre=/bin/mount -o remount,exec /home/containerd
W0716 15:17:04.308]       ExecStartPre=/usr/bin/curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" -o /home/containerd/configure.sh http://metadata.google.internal/computeMetadata/v1/instance/attributes/containerd-configure-sh
W0716 15:17:04.308]       ExecStartPre=/bin/chmod 544 /home/containerd/configure.sh
W0716 15:17:04.309]       ExecStart=/home/containerd/configure.sh
W0716 15:17:04.309] 
W0716 15:17:04.309]       [Install]
W0716 15:17:04.309]       WantedBy=containerd.target
W0716 15:17:04.309] 
... skipping 74 lines ...
W0716 15:17:04.319] # fetch_metadata fetches metadata from GCE metadata server.
W0716 15:17:04.319] # Var set:
W0716 15:17:04.320] # 1. Metadata key: key of the metadata.
W0716 15:17:04.320] fetch_metadata() {
W0716 15:17:04.320]   local -r key=$1
W0716 15:17:04.320]   local -r attributes="http://metadata.google.internal/computeMetadata/v1/instance/attributes"
W0716 15:17:04.320]   if curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" "${attributes}/" | \
W0716 15:17:04.320]     grep -q "^${key}$"; then
W0716 15:17:04.321]     curl --fail --retry 5 --retry-delay 3 --silent --show-error -H "X-Google-Metadata-Request: True" \
W0716 15:17:04.321]       "${attributes}/${key}"
W0716 15:17:04.321]   fi
W0716 15:17:04.321] }
W0716 15:17:04.321] 
W0716 15:17:04.321] # fetch_env fetches environment variables from GCE metadata server
W0716 15:17:04.321] # and generate a env file under ${CONTAINERD_HOME}. It assumes that
... skipping 59 lines ...
W0716 15:17:04.330]     deploy_dir=$(echo "${pull_refs}" | sha1sum | awk '{print $1}')
W0716 15:17:04.330]     deploy_path="${deploy_path}/${deploy_dir}"
W0716 15:17:04.330]   fi
W0716 15:17:04.330] 
W0716 15:17:04.330]   # TODO(random-liu): Put version into the metadata instead of
W0716 15:17:04.330]   # deciding it in cloud init. This may cause issue to reboot test.
W0716 15:17:04.331]   version=$(curl -f --ipv4 --retry 6 --retry-delay 3 --silent --show-error \
W0716 15:17:04.331]     https://storage.googleapis.com/${deploy_path}/latest)
W0716 15:17:04.331] fi
W0716 15:17:04.331] 
W0716 15:17:04.331] TARBALL_GCS_NAME="${pkg_prefix}-${version}.linux-amd64.tar.gz"
W0716 15:17:04.331] # TARBALL_GCS_PATH is the path to download cri-containerd tarball for node e2e.
W0716 15:17:04.331] TARBALL_GCS_PATH="https://storage.googleapis.com/${deploy_path}/${TARBALL_GCS_NAME}"
... skipping 129 lines ...
W0716 15:20:50.149] I0716 15:20:50.149322    4479 utils.go:82] Configure iptables firewall rules on "tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1"
W0716 15:20:51.316] I0716 15:20:51.316176    4479 utils.go:117] Killing any existing node processes on "tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1"
W0716 15:20:51.389] I0716 15:20:51.388814    4479 utils.go:117] Killing any existing node processes on "tmp-node-e2e-c47c32b3-cos-stable-60-9592-84-0"
W0716 15:20:52.503] I0716 15:20:52.503104    4479 node_e2e.go:164] Starting tests on "tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1"
W0716 15:20:52.527] I0716 15:20:52.527374    4479 node_e2e.go:108] GCI/COS node and GCI/COS mounter both detected, modifying --experimental-mounter-path accordingly
W0716 15:20:52.528] I0716 15:20:52.527453    4479 node_e2e.go:164] Starting tests on "tmp-node-e2e-c47c32b3-cos-stable-60-9592-84-0"
W0716 15:21:00.801] I0716 15:21:00.801174    4479 remote.go:197] Test failed unexpectedly. Attempting to retrieving system logs (only works for nodes with journald)
W0716 15:21:01.427] I0716 15:21:01.427117    4479 remote.go:202] Got the system logs from journald; copying it back...
W0716 15:21:02.275] I0716 15:21:02.275026    4479 remote.go:122] Copying test artifacts from "tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1"
W0716 15:21:03.500] I0716 15:21:03.499548    4479 run_remote.go:717] Deleting instance "tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1"
I0716 15:21:04.447] 
I0716 15:21:04.447] >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I0716 15:21:04.447] >                              START TEST                                >
... skipping 46 lines ...
I0716 15:21:04.454] I0716 15:20:56.638326    2836 validators.go:44] Validating package...
I0716 15:21:04.454] PASS
I0716 15:21:04.454] I0716 15:20:56.641458    2773 e2e_node_suite_test.go:149] Pre-pulling images so that they are cached for the tests.
I0716 15:21:04.454] I0716 15:20:56.641501    2773 remote_runtime.go:43] Connecting to runtime service unix:///run/containerd/containerd.sock
I0716 15:21:04.454] I0716 15:20:56.641667    2773 remote_image.go:41] Connecting to image service unix:///run/containerd/containerd.sock
I0716 15:21:04.455] I0716 15:20:56.641701    2773 image_list.go:146] Pre-pulling images with CRI [docker.io/library/busybox:1.29 docker.io/library/nginx:1.14-alpine gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0 gcr.io/kubernetes-e2e-test-images/hostexec:1.1 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0 gcr.io/kubernetes-e2e-test-images/liveness:1.0 gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0 gcr.io/kubernetes-e2e-test-images/mounttest:1.0 gcr.io/kubernetes-e2e-test-images/net:1.0 gcr.io/kubernetes-e2e-test-images/netexec:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is-amd64:1.0 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/test-webserver:1.0 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0 google/cadvisor:latest k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff k8s.gcr.io/node-problem-detector:v0.6.2 k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa k8s.gcr.io/pause:3.1 k8s.gcr.io/stress:v1]
I0716 15:21:04.455] E0716 15:20:56.641899    2773 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.456] E0716 15:20:56.641922    2773 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.456] W0716 15:20:56.641936    2773 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (1 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.456] E0716 15:20:57.642047    2773 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.456] E0716 15:20:57.642090    2773 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.457] W0716 15:20:57.642101    2773 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (2 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.457] E0716 15:20:58.642327    2773 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.457] E0716 15:20:58.642384    2773 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.457] W0716 15:20:58.642396    2773 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (3 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.457] E0716 15:20:59.642621    2773 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.458] E0716 15:20:59.642677    2773 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.458] W0716 15:20:59.642688    2773 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (4 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.458] E0716 15:21:00.642894    2773 remote_image.go:87] ImageStatus "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.458] E0716 15:21:00.642948    2773 remote_image.go:112] PullImage "docker.io/library/busybox:1.29" from image service failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.459] W0716 15:21:00.642959    2773 image_list.go:159] Failed to pull docker.io/library/busybox:1.29 as user "root", retrying in 1s (5 of 5): rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.459] W0716 15:21:00.642967    2773 image_list.go:163] Could not pre-pull image docker.io/library/busybox:1.29 rpc error: code = Unavailable desc = grpc: the connection is unavailable output: 
I0716 15:21:04.459] 
I0716 15:21:04.459] 
I0716 15:21:04.459] Failure [4.698 seconds]
I0716 15:21:04.459] [BeforeSuite] BeforeSuite 
I0716 15:21:04.459] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.460] 
I0716 15:21:04.460]   Expected error:
I0716 15:21:04.460]       <*status.statusError | 0xc420f2e780>: {
I0716 15:21:04.460]           Code: 14,
I0716 15:21:04.460]           Message: "grpc: the connection is unavailable",
I0716 15:21:04.460]           Details: nil,
I0716 15:21:04.460]       }
I0716 15:21:04.460]       rpc error: code = Unavailable desc = grpc: the connection is unavailable
I0716 15:21:04.460]   not to have occurred
I0716 15:21:04.460] 
I0716 15:21:04.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:152
I0716 15:21:04.461] ------------------------------
I0716 15:21:04.461] Failure [4.615 seconds]
I0716 15:21:04.461] [BeforeSuite] BeforeSuite 
I0716 15:21:04.461] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.461] 
I0716 15:21:04.461]   BeforeSuite on Node 1 failed
I0716 15:21:04.461] 
I0716 15:21:04.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.462] ------------------------------
I0716 15:21:04.462] Failure [4.707 seconds]
I0716 15:21:04.462] [BeforeSuite] BeforeSuite 
I0716 15:21:04.462] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.462] 
I0716 15:21:04.462]   BeforeSuite on Node 1 failed
I0716 15:21:04.462] 
I0716 15:21:04.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.463] ------------------------------
I0716 15:21:04.463] Failure [4.685 seconds]
I0716 15:21:04.463] [BeforeSuite] BeforeSuite 
I0716 15:21:04.463] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.463] 
I0716 15:21:04.463]   BeforeSuite on Node 1 failed
I0716 15:21:04.463] 
I0716 15:21:04.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.464] ------------------------------
I0716 15:21:04.464] Failure [4.786 seconds]
I0716 15:21:04.464] [BeforeSuite] BeforeSuite 
I0716 15:21:04.464] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.464] 
I0716 15:21:04.464]   BeforeSuite on Node 1 failed
I0716 15:21:04.464] 
I0716 15:21:04.464]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.464] ------------------------------
I0716 15:21:04.465] Failure [4.683 seconds]
I0716 15:21:04.465] [BeforeSuite] BeforeSuite 
I0716 15:21:04.465] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.465] 
I0716 15:21:04.465]   BeforeSuite on Node 1 failed
I0716 15:21:04.465] 
I0716 15:21:04.465]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.465] ------------------------------
I0716 15:21:04.465] Failure [4.736 seconds]
I0716 15:21:04.465] [BeforeSuite] BeforeSuite 
I0716 15:21:04.466] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.466] 
I0716 15:21:04.466]   BeforeSuite on Node 1 failed
I0716 15:21:04.466] 
I0716 15:21:04.466]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.466] ------------------------------
I0716 15:21:04.466] Failure [4.668 seconds]
I0716 15:21:04.466] [BeforeSuite] BeforeSuite 
I0716 15:21:04.467] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.467] 
I0716 15:21:04.467]   BeforeSuite on Node 1 failed
I0716 15:21:04.467] 
I0716 15:21:04.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:142
I0716 15:21:04.467] ------------------------------
I0716 15:21:04.467] I0716 15:21:00.751414    2773 e2e_node_suite_test.go:191] Tests Finished
I0716 15:21:04.467] 
I0716 15:21:04.467] 
I0716 15:21:04.467] Ran 2208 of 0 Specs in 4.848 seconds
I0716 15:21:04.468] FAIL! -- 0 Passed | 2208 Failed | 0 Flaked | 0 Pending | 0 Skipped 
I0716 15:21:04.468] 
I0716 15:21:04.468] Ginkgo ran 1 suite in 7.62335429s
I0716 15:21:04.468] Test Suite Failed
I0716 15:21:04.468] 
I0716 15:21:04.469] Failure Finished Test Suite on Host tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1
I0716 15:21:04.470] [command [ssh -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine prow@35.247.113.65 -- sudo sh -c 'cd /tmp/node-e2e-20190716T152040 && timeout -k 30s 3900.000000s ./ginkgo --nodes=8 --focus="\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 ./e2e_node.test -- --system-spec-name= --system-spec-file= --extra-envs= --logtostderr --v 4 --node-name=tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1 --report-dir=/tmp/node-e2e-20190716T152040/results --report-prefix=ubuntu --image-description="ubuntu-gke-1604-xenial-v20170420-1" --kubelet-flags=--experimental-kernel-memcg-notification=true --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'] failed with error: exit status 1, command [scp -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o CheckHostIP=no -o StrictHostKeyChecking=no -o ServerAliveInterval=30 -o LogLevel=ERROR -i /workspace/.ssh/google_compute_engine -r prow@35.247.113.65:/tmp/node-e2e-20190716T152040/results/*.log /workspace/_artifacts/tmp-node-e2e-c47c32b3-ubuntu-gke-1604-xenial-v20170420-1] failed with error: exit status 1]
I0716 15:21:04.471] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0716 15:21:04.471] <                              FINISH TEST                               <
I0716 15:21:04.471] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
I0716 15:21:04.471] 
W0716 15:24:09.738] I0716 15:24:09.737701    4479 remote.go:122] Copying test artifacts from "tmp-node-e2e-c47c32b3-cos-stable-60-9592-84-0"
W0716 15:24:14.152] I0716 15:24:14.152468    4479 run_remote.go:717] Deleting instance "tmp-node-e2e-c47c32b3-cos-stable-60-9592-84-0"
... skipping 105 lines ...
I0716 15:24:15.021] Jul 16 15:22:20.120: INFO: Skipping waiting for service account
I0716 15:24:15.021] [BeforeEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.021]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
I0716 15:24:15.021] [It] should not launch unsafe, but not explicitly enabled sysctls on the node
I0716 15:24:15.022]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:181
I0716 15:24:15.022] STEP: Creating a pod with a greylisted, but not whitelisted sysctl on the node
I0716 15:24:15.022] STEP: Watching for error events or started pod
I0716 15:24:15.022] STEP: Checking that the pod was rejected
I0716 15:24:15.022] [AfterEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.022]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
I0716 15:24:15.022] Jul 16 15:22:22.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0716 15:24:15.022] STEP: Destroying namespace "e2e-tests-sysctl-rfhxj" for this suite.
I0716 15:24:15.023] Jul 16 15:22:28.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 523 lines ...
I0716 15:24:15.089] Jul 16 15:22:40.684: INFO: Skipping waiting for service account
I0716 15:24:15.090] [BeforeEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.090]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
I0716 15:24:15.090] [It] should support sysctls
I0716 15:24:15.090]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:60
I0716 15:24:15.090] STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
I0716 15:24:15.090] STEP: Watching for error events or started pod
I0716 15:24:15.090] STEP: Waiting for pod completion
I0716 15:24:15.090] STEP: Checking that the pod succeeded
I0716 15:24:15.090] STEP: Getting logs from the pod
I0716 15:24:15.090] STEP: Checking that the sysctl is actually updated
I0716 15:24:15.091] [AfterEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.091]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
... skipping 542 lines ...
I0716 15:24:15.156] Jul 16 15:23:01.139: INFO: Skipping waiting for service account
I0716 15:24:15.156] [BeforeEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.157]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:56
I0716 15:24:15.157] [It] should support unsafe sysctls which are actually whitelisted
I0716 15:24:15.157]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:103
I0716 15:24:15.157] STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
I0716 15:24:15.157] STEP: Watching for error events or started pod
I0716 15:24:15.157] STEP: Waiting for pod completion
I0716 15:24:15.157] STEP: Checking that the pod succeeded
I0716 15:24:15.157] STEP: Getting logs from the pod
I0716 15:24:15.157] STEP: Checking that the sysctl is actually updated
I0716 15:24:15.158] [AfterEach] [k8s.io] Sysctls [NodeFeature:Sysctls]
I0716 15:24:15.158]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:148
... skipping 94 lines ...
I0716 15:24:15.169] STEP: Wait for 0 temp events generated
I0716 15:24:15.169] STEP: Wait for 0 total events generated
I0716 15:24:15.169] STEP: Make sure only 0 total events generated
I0716 15:24:15.169] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.169] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.169] STEP: should not generate events for too old log
I0716 15:24:15.169] STEP: Inject 3 logs: "temporary error"
I0716 15:24:15.169] STEP: Wait for 0 temp events generated
I0716 15:24:15.169] STEP: Wait for 0 total events generated
I0716 15:24:15.169] STEP: Make sure only 0 total events generated
I0716 15:24:15.169] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.170] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.170] STEP: should not change node condition for too old log
I0716 15:24:15.170] STEP: Inject 1 logs: "permanent error 1"
I0716 15:24:15.170] STEP: Wait for 0 temp events generated
I0716 15:24:15.170] STEP: Wait for 0 total events generated
I0716 15:24:15.170] STEP: Make sure only 0 total events generated
I0716 15:24:15.170] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.170] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.170] STEP: should generate event for old log within lookback duration
I0716 15:24:15.170] STEP: Inject 3 logs: "temporary error"
I0716 15:24:15.171] STEP: Wait for 3 temp events generated
I0716 15:24:15.171] STEP: Wait for 3 total events generated
I0716 15:24:15.171] STEP: Make sure only 3 total events generated
I0716 15:24:15.171] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.171] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.171] STEP: should change node condition for old log within lookback duration
I0716 15:24:15.171] STEP: Inject 1 logs: "permanent error 1"
I0716 15:24:15.171] STEP: Wait for 3 temp events generated
I0716 15:24:15.171] STEP: Wait for 4 total events generated
I0716 15:24:15.171] STEP: Make sure only 4 total events generated
I0716 15:24:15.172] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.172] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.172] STEP: should generate event for new log
I0716 15:24:15.172] STEP: Inject 3 logs: "temporary error"
I0716 15:24:15.172] STEP: Wait for 6 temp events generated
I0716 15:24:15.172] STEP: Wait for 7 total events generated
I0716 15:24:15.172] STEP: Make sure only 7 total events generated
I0716 15:24:15.172] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.172] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.172] STEP: should not update node condition with the same reason
I0716 15:24:15.173] STEP: Inject 1 logs: "permanent error 1different message"
I0716 15:24:15.173] STEP: Wait for 6 temp events generated
I0716 15:24:15.173] STEP: Wait for 7 total events generated
I0716 15:24:15.173] STEP: Make sure only 7 total events generated
I0716 15:24:15.173] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.173] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.173] STEP: should change node condition for new log
I0716 15:24:15.173] STEP: Inject 1 logs: "permanent error 2"
I0716 15:24:15.173] STEP: Wait for 6 temp events generated
I0716 15:24:15.173] STEP: Wait for 8 total events generated
I0716 15:24:15.174] STEP: Make sure only 8 total events generated
I0716 15:24:15.174] STEP: Make sure node condition "TestCondition" is set
I0716 15:24:15.174] STEP: Make sure node condition "TestCondition" is stable
I0716 15:24:15.174] [AfterEach] [k8s.io] SystemLogMonitor
... skipping 22 lines ...
I0716 15:24:15.176]     should generate node condition and events for corresponding errors
I0716 15:24:15.177]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e_node/node_problem_detector_linux.go:245
I0716 15:24:15.177] ------------------------------
I0716 15:24:15.177] I0716 15:24:09.193448    1250 e2e_node_suite_test.go:186] Stopping node services...
I0716 15:24:15.177] I0716 15:24:09.193470    1250 server.go:258] Kill server "services"
I0716 15:24:15.177] I0716 15:24:09.193481    1250 server.go:295] Killing process 1393 (services) with -TERM
I0716 15:24:15.177] E0716 15:24:09.286556    1250 services.go:88] Failed to stop services: error stopping "services": waitid: no child processes
I0716 15:24:15.177] I0716 15:24:09.286577    1250 server.go:258] Kill server "kubelet"
I0716 15:24:15.177] I0716 15:24:09.359120    1250 services.go:145] Fetching log files...
I0716 15:24:15.178] I0716 15:24:09.359276    1250 services.go:154] Get log file "kern.log" with journalctl command [-k].
I0716 15:24:15.178] I0716 15:24:09.402591    1250 services.go:154] Get log file "cloud-init.log" with journalctl command [-u cloud*].
I0716 15:24:15.178] I0716 15:24:09.412049    1250 services.go:154] Get log file "docker.log" with journalctl command [-u docker].
I0716 15:24:15.178] I0716 15:24:09.416497    1250 services.go:154] Get log file "containerd.log" with journalctl command [-u containerd].
I0716 15:24:15.178] I0716 15:24:09.520361    1250 services.go:154] Get log file "kubelet.log" with journalctl command [-u kubelet-20190716T152040.service].
I0716 15:24:15.178] I0716 15:24:09.694916    1250 e2e_node_suite_test.go:191] Tests Finished
I0716 15:24:15.178] 
I0716 15:24:15.178] 
I0716 15:24:15.178] Ran 33 of 276 Specs in 192.219 seconds
I0716 15:24:15.179] SUCCESS! -- 33 Passed | 0 Failed | 0 Flaked | 0 Pending | 243 Skipped 
I0716 15:24:15.179] 
I0716 15:24:15.179] Ginkgo ran 1 suite in 3m16.581855323s
I0716 15:24:15.179] Test Suite Passed
I0716 15:24:15.179] 
I0716 15:24:15.179] Success Finished Test Suite on Host tmp-node-e2e-c47c32b3-cos-stable-60-9592-84-0
I0716 15:24:15.179] <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
... skipping 5 lines ...
W0716 15:24:15.281] 2019/07/16 15:24:15 process.go:155: Step 'go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml' finished in 7m42.059319729s
W0716 15:24:15.282] 2019/07/16 15:24:15 node.go:42: Noop - Node DumpClusterLogs() - /workspace/_artifacts: 
W0716 15:24:15.282] 2019/07/16 15:24:15 node.go:52: Noop - Node Down()
W0716 15:24:15.282] 2019/07/16 15:24:15 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml.
W0716 15:24:15.282] 2019/07/16 15:24:15 process.go:153: Running: bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"
W0716 15:24:15.415] 2019/07/16 15:24:15 process.go:155: Step 'bash -c . hack/lib/version.sh && KUBE_ROOT=. kube::version::get_version_vars && echo "${KUBE_GIT_VERSION-}"' finished in 184.702729ms
W0716 15:24:15.416] 2019/07/16 15:24:15 main.go:316: Something went wrong: encountered 1 errors: [error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=cri-containerd-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeFeature:.+\]" --skip="\[Flaky\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml: exit status 1]
W0716 15:24:15.420] Traceback (most recent call last):
W0716 15:24:15.420]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 778, in <module>
W0716 15:24:15.420]     main(parse_args())
W0716 15:24:15.421]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 626, in main
W0716 15:24:15.421]     mode.start(runner_args)
W0716 15:24:15.421]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 262, in start
W0716 15:24:15.421]     check_env(env, self.command, *args)
W0716 15:24:15.422]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env
W0716 15:24:15.422]     subprocess.check_call(cmd, env=env)
W0716 15:24:15.422]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0716 15:24:15.422]     raise CalledProcessError(retcode, cmd)
W0716 15:24:15.423] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--test', '--deployment=node', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--node-args=--image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/containerd-release-1.2/image-config.yaml', '--gcp-project=cri-containerd-node-e2e', '--gcp-zone=us-west1-b', '--node-test-args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\\"name\\": \\"containerd.log\\", \\"journalctl\\": [\\"-u\\", \\"containerd\\"]}"', '--node-tests=true', '--test_args=--nodes=8 --focus="\\[NodeFeature:.+\\]" --skip="\\[Flaky\\]|\\[Serial\\]" --flakeAttempts=2', '--timeout=65m')' returned non-zero exit status 1
E0716 15:24:15.426] Command failed
I0716 15:24:15.426] process 326 exited with code 1 after 7.7m
E0716 15:24:15.426] FAIL: ci-containerd-node-e2e-features-1-2
I0716 15:24:15.427] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0716 15:24:15.892] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0716 15:24:15.939] process 25641 exited with code 0 after 0.0m
I0716 15:24:15.940] Call:  gcloud config get-value account
I0716 15:24:16.221] process 25653 exited with code 0 after 0.0m
I0716 15:24:16.222] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0716 15:24:16.222] Upload result and artifacts...
I0716 15:24:16.222] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-containerd-node-e2e-features-1-2/1151148315560120320
I0716 15:24:16.222] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-features-1-2/1151148315560120320/artifacts
W0716 15:24:17.195] CommandException: One or more URLs matched no objects.
E0716 15:24:17.310] Command failed
I0716 15:24:17.310] process 25665 exited with code 1 after 0.0m
W0716 15:24:17.310] Remote dir gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-features-1-2/1151148315560120320/artifacts not exist yet
I0716 15:24:17.310] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-containerd-node-e2e-features-1-2/1151148315560120320/artifacts
I0716 15:24:19.311] process 25807 exited with code 0 after 0.0m
I0716 15:24:19.312] Call:  git rev-parse HEAD
I0716 15:24:19.316] process 26382 exited with code 0 after 0.0m
... skipping 13 lines ...