PR oomichi: Update links to hack scripts
Result FAILURE
Tests 0 failed / 0 succeeded
Started 2020-09-01 18:48
Elapsed 15m47s
Revision 2b1de2350af4a606ca9ec30fa0bc4e8fcdc6290e
Refs 1833

No Test Failures!


Error lines from build-log.txt

... skipping 252 lines ...
Analyzing: 4 targets (21 packages loaded, 27 targets configured)
Analyzing: 4 targets (376 packages loaded, 644 targets configured)
Analyzing: 4 targets (1588 packages loaded, 12436 targets configured)
Analyzing: 4 targets (2156 packages loaded, 15024 targets configured)
Analyzing: 4 targets (2156 packages loaded, 15024 targets configured)
Analyzing: 4 targets (2157 packages loaded, 15024 targets configured)
DEBUG: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/bazel_gazelle/internal/go_repository.bzl:184:13: org_golang_x_tools: gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/cmd/fiximports/testdata/src/old.com/bad/bad.go:2:43: expected 'package', found 'EOF'
gazelle: found packages nointerface (nointerface.go) and pointer (pointer.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gccgoimporter/testdata
gazelle: found packages a (a.go) and b (b.go) in /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/internal/gcimporter/testdata
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/loader/testdata/badpkgdecl.go:1:34: expected 'package', found 'EOF'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/geez/help.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/v2/me.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/extra/yo.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/tempmod/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.0.0/main.go:1:16: expected ';', found '.'
gazelle: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go: error reading go file: /bazel-scratch/.cache/bazel/_bazel_root/cae228f2a89ef5ee47c2085e441a3561/external/org_golang_x_tools/go/packages/packagestest/testdata/groups/two/modules/example.com/what@v1.1.0/main.go:1:16: expected ';', found '.'
gazelle: finding module path for import domain.name/importdecl: exit status 1: can't load package: package domain.name/importdecl: cannot find module providing package domain.name/importdecl
gazelle: finding module path for import old.com/one: exit status 1: can't load package: package old.com/one: cannot find module providing package old.com/one
gazelle: finding module path for import titanic.biz/bar: exit status 1: can't load package: package titanic.biz/bar: cannot find module providing package titanic.biz/bar
gazelle: finding module path for import titanic.biz/foo: exit status 1: can't load package: package titanic.biz/foo: cannot find module providing package titanic.biz/foo
gazelle: finding module path for import fruit.io/pear: exit status 1: can't load package: package fruit.io/pear: cannot find module providing package fruit.io/pear
gazelle: finding module path for import fruit.io/banana: exit status 1: can't load package: package fruit.io/banana: cannot find module providing package fruit.io/banana
... skipping 170 lines ...
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 60)
WARNING: Waiting for server process to terminate (waited 30 seconds, waiting at most 60)
INFO: Waited 60 seconds for server process (pid=8193) to terminate.
WARNING: Waiting for server process to terminate (waited 5 seconds, waiting at most 10)
WARNING: Waiting for server process to terminate (waited 10 seconds, waiting at most 10)
INFO: Waited 10 seconds for server process (pid=8193) to terminate.
FATAL: Attempted to kill stale server process (pid=8193) using SIGKILL, but it did not die in a timely fashion.
+ true
+ pkill ^bazel
+ true
+ mkdir -p _output/bin/
+ cp bazel-bin/test/e2e/e2e.test _output/bin/
+ find /home/prow/go/src/k8s.io/kubernetes/bazel-bin/ -name kubectl -type f
... skipping 50 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
    provider-id: kind://docker/kind/kind-worker2
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.2
    provider-id: kind://docker/kind/kind-worker2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
... skipping 38 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.3
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
    provider-id: kind://docker/kind/kind-worker
---
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: kind-control-plane:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.3
    provider-id: kind://docker/kind/kind-worker
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
... skipping 38 lines ...
localAPIEndpoint:
  advertiseAddress: 172.18.0.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
    provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
... skipping 5 lines ...
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.18.0.4
    provider-id: kind://docker/kind/kind-control-plane
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
... skipping 73 lines ...
I0901 18:58:59.607913     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:00.108065     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:00.607957     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:01.107942     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:01.608326     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:02.108744     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s  in 0 milliseconds
I0901 18:59:09.459523     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s 500 Internal Server Error in 6851 milliseconds
I0901 18:59:09.612272     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0901 18:59:10.112592     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0901 18:59:10.610213     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s 500 Internal Server Error in 2 milliseconds
[apiclient] All control plane components are healthy after 16.007733 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0901 18:59:11.112695     238 round_trippers.go:443] GET https://kind-control-plane:6443/healthz?timeout=32s 200 OK in 5 milliseconds
I0901 18:59:11.113718     238 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0901 18:59:11.120699     238 round_trippers.go:443] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps 201 Created in 5 milliseconds
I0901 18:59:11.126612     238 round_trippers.go:443] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles 201 Created in 3 milliseconds
... skipping 84 lines ...
I0901 18:59:23.700569     522 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
I0901 18:59:23.702632     522 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0901 18:59:23.702664     522 join.go:433] [preflight] Discovering cluster-info
I0901 18:59:23.702763     522 token.go:188] [discovery] Trying to connect to API Server "kind-control-plane:6443"
I0901 18:59:23.704366     522 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://kind-control-plane:6443"
I0901 18:59:23.762948     522 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 58 milliseconds
I0901 18:59:23.763798     522 token.go:191] [discovery] Failed to connect to API Server "kind-control-plane:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I0901 18:59:28.767605     522 token.go:188] [discovery] Trying to connect to API Server "kind-control-plane:6443"
I0901 18:59:28.768365     522 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://kind-control-plane:6443"
I0901 18:59:28.782162     522 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 13 milliseconds
I0901 18:59:28.783384     522 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "kind-control-plane:6443"
I0901 18:59:28.783416     522 token.go:194] [discovery] Successfully established connection with API Server "kind-control-plane:6443"
I0901 18:59:28.783448     522 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 63 lines ...
I0901 18:59:23.712339     522 joinconfiguration.go:75] loading configuration from "/kind/kubeadm.conf"
I0901 18:59:23.714249     522 controlplaneprepare.go:211] [download-certs] Skipping certs download
I0901 18:59:23.714282     522 join.go:433] [preflight] Discovering cluster-info
I0901 18:59:23.714342     522 token.go:188] [discovery] Trying to connect to API Server "kind-control-plane:6443"
I0901 18:59:23.714969     522 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://kind-control-plane:6443"
I0901 18:59:23.764490     522 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 49 milliseconds
I0901 18:59:23.767129     522 token.go:191] [discovery] Failed to connect to API Server "kind-control-plane:6443": token id "abcdef" is invalid for this cluster or it has expired. Use "kubeadm token create" on the control-plane node to create a new valid token
I0901 18:59:28.767356     522 token.go:188] [discovery] Trying to connect to API Server "kind-control-plane:6443"
I0901 18:59:28.768336     522 token.go:73] [discovery] Created cluster-info discovery client, requesting info from "https://kind-control-plane:6443"
I0901 18:59:28.778230     522 round_trippers.go:443] GET https://kind-control-plane:6443/api/v1/namespaces/kube-public/configmaps/cluster-info 200 OK in 9 milliseconds
I0901 18:59:28.779567     522 token.go:103] [discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "kind-control-plane:6443"
I0901 18:59:28.779597     522 token.go:194] [discovery] Successfully established connection with API Server "kind-control-plane:6443"
I0901 18:59:28.779625     522 discovery.go:51] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
... skipping 94 lines ...

Running in parallel across 25 nodes

Sep  1 19:00:08.972: INFO: >>> kubeConfig: /root/.kube/kind-test-config
Sep  1 19:00:08.975: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep  1 19:00:09.015: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep  1 19:00:09.095: INFO: The status of Pod kube-proxy-25kdz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:09.095: INFO: 7 / 8 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep  1 19:00:09.095: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep  1 19:00:09.095: INFO: POD               NODE         PHASE    GRACE  CONDITIONS
Sep  1 19:00:09.095: INFO: kube-proxy-25kdz  kind-worker  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 18:59:43 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 18:59:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 18:59:58 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 18:59:43 +0000 UTC  }]
Sep  1 19:00:09.095: INFO: 
Sep  1 19:00:11.125: INFO: The status of Pod kube-proxy-nn5p8 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:11.125: INFO: 7 / 8 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Sep  1 19:00:11.125: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep  1 19:00:11.125: INFO: POD               NODE         PHASE    GRACE  CONDITIONS
Sep  1 19:00:11.125: INFO: kube-proxy-nn5p8  kind-worker  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:10 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:10 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:10 +0000 UTC  }]
Sep  1 19:00:11.125: INFO: 
Sep  1 19:00:13.116: INFO: The status of Pod kube-controller-manager-kind-control-plane is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:13.116: INFO: 8 / 9 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Sep  1 19:00:13.116: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep  1 19:00:13.116: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Sep  1 19:00:13.116: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Pending         []
Sep  1 19:00:13.116: INFO: 
Sep  1 19:00:15.135: INFO: The status of Pod kube-apiserver-kind-control-plane is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:15.135: INFO: The status of Pod kube-controller-manager-kind-control-plane is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:15.135: INFO: The status of Pod kube-proxy-ls4v9 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep  1 19:00:15.135: INFO: 7 / 10 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Sep  1 19:00:15.135: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Sep  1 19:00:15.135: INFO: POD                                         NODE                PHASE    GRACE  CONDITIONS
Sep  1 19:00:15.135: INFO: kube-apiserver-kind-control-plane           kind-control-plane  Pending         []
Sep  1 19:00:15.135: INFO: kube-controller-manager-kind-control-plane  kind-control-plane  Pending         []
Sep  1 19:00:15.135: INFO: kube-proxy-ls4v9                            kind-worker2        Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:14 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:14 +0000 UTC ContainersNotReady containers with unready status: [kube-proxy]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-09-01 19:00:14 +0000 UTC  }]
... skipping 1581 lines ...
Sep  1 19:01:04.095: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep  1 19:01:11.471: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
... skipping 482 lines ...
STEP: Creating a kubernetes client
Sep  1 19:01:22.611: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep  1 19:01:22.724: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep  1 19:01:30.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep  1 19:01:37.139: INFO: namespace init-container-2871 deletion completed in 6.264783892s


• [SLOW TEST:14.528 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:151
... skipping 574 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep  1 19:01:50.955: INFO: Successfully updated pod "pod-update-activedeadlineseconds-29abeb47-5081-4c7a-b6d2-539fcb83c7e8"
Sep  1 19:01:50.955: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-29abeb47-5081-4c7a-b6d2-539fcb83c7e8" in namespace "pods-8883" to be "terminated due to deadline exceeded"
Sep  1 19:01:50.960: INFO: Pod "pod-update-activedeadlineseconds-29abeb47-5081-4c7a-b6d2-539fcb83c7e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.809142ms
Sep  1 19:01:52.987: INFO: Pod "pod-update-activedeadlineseconds-29abeb47-5081-4c7a-b6d2-539fcb83c7e8": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.032021894s
Sep  1 19:01:52.987: INFO: Pod "pod-update-activedeadlineseconds-29abeb47-5081-4c7a-b6d2-539fcb83c7e8" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  test/e2e/framework/framework.go:152
Sep  1 19:01:52.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8883" for this suite.
Sep  1 19:02:01.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 143 lines ...
Sep  1 19:01:38.477: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep  1 19:01:51.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8051" for this suite.
Sep  1 19:01:59.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 398 lines ...
[BeforeEach] [sig-network] Services
  test/e2e/network/service.go:91
[It] should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating service endpoint-test2 in namespace services-8617
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8617 to expose endpoints map[]
Sep  1 19:01:55.389: INFO: Get endpoints failed (3.828475ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Sep  1 19:01:56.410: INFO: successfully validated that service endpoint-test2 in namespace services-8617 exposes endpoints map[] (1.024041137s elapsed)
STEP: Creating pod pod1 in namespace services-8617
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8617 to expose endpoints map[pod1:[80]]
Sep  1 19:02:00.788: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.335732999s elapsed, will retry)
Sep  1 19:02:01.799: INFO: successfully validated that service endpoint-test2 in namespace services-8617 exposes endpoints map[pod1:[80]] (5.34620313s elapsed)
STEP: Creating pod pod2 in namespace services-8617
... skipping 60 lines ...
STEP: Creating a kubernetes client
Sep  1 19:01:07.515: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
STEP: creating the pod
Sep  1 19:01:07.612: INFO: PodSpec: initContainers in spec.initContainers
Sep  1 19:02:06.866: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-7c5eac79-17f1-4c28-a069-ec389f52219c", GenerateName:"", Namespace:"init-container-9527", SelfLink:"/api/v1/namespaces/init-container-9527/pods/pod-init-7c5eac79-17f1-4c28-a069-ec389f52219c", UID:"ec1286d3-25c2-40de-87cc-2847b98cc155", ResourceVersion:"4029", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63734583667, loc:(*time.Location)(0x7861ce0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"612186992"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-4qdjk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0020d6440), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdjk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdjk", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-4qdjk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002055a88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020aa960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002055b10)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002055b30)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002055b38), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002055b3c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583667, loc:(*time.Location)(0x7861ce0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583667, loc:(*time.Location)(0x7861ce0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583667, loc:(*time.Location)(0x7861ce0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583667, loc:(*time.Location)(0x7861ce0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.3", PodIP:"10.244.1.33", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.33"}}, StartTime:(*v1.Time)(0xc002079de0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003b67e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0003b6850)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://558c5eba7733e2e6fc5191790d9d446a0b93569043281ef5ac42b6704a9a97d4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002079e20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002079e00), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002055bbf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:152
Sep  1 19:02:06.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9527" for this suite.
Sep  1 19:02:18.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  1 19:02:19.227: INFO: namespace init-container-9527 deletion completed in 12.34870907s


• [SLOW TEST:71.713 seconds]
[k8s.io] InitContainer [NodeConformance]
test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:151
... skipping 1216 lines ...
Sep  1 19:02:27.430: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-3351-crds.spec'
Sep  1 19:02:28.375: INFO: stderr: ""
Sep  1 19:02:28.375: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3351-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep  1 19:02:28.375: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-3351-crds.spec.bars'
Sep  1 19:02:29.194: INFO: stderr: ""
Sep  1 19:02:29.194: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-3351-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep  1 19:02:29.195: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config explain e2e-test-crd-publish-openapi-3351-crds.spec.bars2'
Sep  1 19:02:30.074: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep  1 19:02:33.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2420" for this suite.
... skipping 57 lines ...
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep  1 19:02:08.215: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:152
Sep  1 19:02:32.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 2 lines ...
Sep  1 19:02:41.364: INFO: namespace job-6831 deletion completed in 8.924721005s


• [SLOW TEST:33.152 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:151
... skipping 728 lines ...
STEP: Wait for the deployment to be ready
Sep  1 19:02:35.014: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Sep  1 19:02:37.056: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583755, loc:(*time.Location)(0x7861ce0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583755, loc:(*time.Location)(0x7861ce0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583755, loc:(*time.Location)(0x7861ce0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63734583755, loc:(*time.Location)(0x7861ce0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep  1 19:02:40.106: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Sep  1 19:02:50.222: INFO: Waiting for webhook configuration to be ready...
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:152
Sep  1 19:02:50.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 7 lines ...
  test/e2e/apimachinery/webhook.go:103


• [SLOW TEST:31.166 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep  1 19:03:00.133: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-35dd50c1-5b82-4c10-a4f3-4d5d578e783a
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:152
Sep  1 19:03:00.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5186" for this suite.
Sep  1 19:03:06.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  1 19:03:06.844: INFO: namespace configmap-5186 deletion completed in 6.533111663s


• [SLOW TEST:6.712 seconds]
[sig-node] ConfigMap
test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  test/e2e/framework/framework.go:151
... skipping 250 lines ...
Sep  1 19:03:02.161: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep  1 19:03:02.161: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config describe pod redis-master-497fk --namespace=kubectl-8743'
Sep  1 19:03:02.511: INFO: stderr: ""
Sep  1 19:03:02.511: INFO: stdout: "Name:         redis-master-497fk\nNamespace:    kubectl-8743\nPriority:     0\nNode:         kind-worker2/172.18.0.2\nStart Time:   Tue, 01 Sep 2020 19:02:57 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.2.84\nIPs:\n  IP:           10.244.2.84\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://ee3032dc18e41d80dff7026dabe920ef92ded2c67d8673911ceb091d1e5b69d3\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker.io/library/redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 01 Sep 2020 19:03:00 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-gtcqt (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-gtcqt:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-gtcqt\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  5s    default-scheduler      Successfully assigned kubectl-8743/redis-master-497fk to kind-worker2\n  Normal  Pulled     3s    kubelet, kind-worker2  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    3s    kubelet, kind-worker2  Created container redis-master\n  Normal  Started    2s    kubelet, kind-worker2  Started container redis-master\n"
Sep  1 19:03:02.511: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config describe rc redis-master --namespace=kubectl-8743'
Sep  1 19:03:03.030: INFO: stderr: ""
Sep  1 19:03:03.031: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8743\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: redis-master-497fk\n"
Sep  1 19:03:03.031: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config describe service redis-master --namespace=kubectl-8743'
Sep  1 19:03:03.346: INFO: stderr: ""
Sep  1 19:03:03.346: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8743\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.96.116.162\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.84:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep  1 19:03:03.351: INFO: Running '/home/prow/go/src/k8s.io/kubernetes/bazel-bin/cmd/kubectl/linux_amd64_pure_stripped/kubectl --server=https://127.0.0.1:41139 --kubeconfig=/root/.kube/kind-test-config describe node kind-control-plane'
Sep  1 19:03:03.744: INFO: stderr: ""
Sep  1 19:03:03.744: INFO: stdout: "Name:               kind-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 01 Sep 2020 18:59:09 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 01 Sep 2020 19:02:50 +0000   Tue, 01 Sep 2020 18:59:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 01 Sep 2020 19:02:50 +0000   Tue, 01 Sep 2020 18:59:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 01 Sep 2020 19:02:50 +0000   Tue, 01 Sep 2020 18:59:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 01 Sep 2020 19:02:50 +0000   Tue, 01 Sep 2020 18:59:49 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.4\n  Hostname:    kind-control-plane\nCapacity:\n cpu:                8\n ephemeral-storage:  253882800Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             53582972Ki\n pods:               110\nAllocatable:\n cpu:                8\n ephemeral-storage:  253882800Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             53582972Ki\n pods:               110\nSystem Info:\n Machine ID:                 17d4f2e5283940cb95423ea37d6d1e2a\n System UUID:                3e24fb93-6b65-4143-bfe7-df0c6f5c2e01\n Boot ID:                    5c2c56dc-bef4-461c-8024-48e559fdabb2\n Kernel Version:             4.15.0-1044-gke\n OS Image:                   Ubuntu Groovy Gorilla (development branch)\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0\n Kubelet Version:            v1.16.15-rc.0.27+5ee4e161ecc8bd\n Kube-Proxy Version:         v1.16.15-rc.0.27+5ee4e161ecc8bd\nPodCIDR:                     10.244.0.0/24\nPodCIDRs:                    10.244.0.0/24\nProviderID:                  kind://docker/kind/kind-control-plane\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5644d7b6d9-ghzsf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m36s\n  kube-system                coredns-5644d7b6d9-l2xk6                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m36s\n  kube-system                etcd-kind-control-plane                       0 (0%)        0 
(0%)      0 (0%)           0 (0%)         2m34s\n  kube-system                kindnet-m4rcn                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m36s\n  kube-system                kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m49s\n  kube-system                kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m52s\n  kube-system                kube-proxy-zs862                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s\n  kube-system                kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m42s\n  local-path-storage         local-path-provisioner-5f4b769cdf-bsdv9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (10%)  100m (1%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:\n  Type    Reason                   Age                  From                            Message\n  ----    ------                   ----                 ----                            -------\n  Normal  NodeHasSufficientMemory  4m8s (x8 over 4m8s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal  NodeHasNoDiskPressure    4m8s (x7 over 4m8s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal  NodeHasSufficientPID     4m8s (x7 over 4m8s)  kubelet, kind-control-plane     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal  Starting                 3m32s                kube-proxy, kind-control-plane  Starting kube-proxy.\n  Normal  Starting                 2m36s                kube-proxy, kind-control-plane  Starting kube-proxy.\n"
... skipping 1050 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep  1 19:03:38.211: INFO: >>> kubeConfig: /root/.kube/kind-test-config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-589744e8-41b8-4011-9172-0bc169d61259
[AfterEach] [sig-api-machinery] Secrets
  test/e2e/framework/framework.go:152
Sep  1 19:03:38.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-810" for this suite.
Sep  1 19:03:46.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep  1 19:03:47.776: INFO: namespace secrets-810 deletion completed in 9.446190228s


• [SLOW TEST:9.565 seconds]
[sig-api-machinery] Secrets
test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:698
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Sep  1 19:03:40.382: INFO: >>> kubeConfig: /root/.kube/kind-test-config
... skipping 802 lines ...