PR: zouyee: Remove type and replace it with scheduler methods
Result: FAILURE
Tests: 1 failed / 751 succeeded
Started: 2020-01-07 08:29
Elapsed: 46m42s
Revision: 12e613ba18d8b5a0921f3329950a11f19d5004d9
Refs: 85442
job-version: v1.18.0-alpha.1.385+4dcaca9300036e
master_os_image: cos-77-12371-89-0
node_os_image: cos-77-12371-89-0
revision: v1.18.0-alpha.1.385+4dcaca9300036e

Test Failures


Test: 22m52s

error during ./hack/ginkgo-e2e.sh --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/logs/artifacts --disable-log-dump=true: exit status 1
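The `--ginkgo.skip` value in the failing invocation is a single regular expression (the backslashes are shell escaping) that excludes any spec whose name contains a `[Slow]`, `[Serial]`, `[Disruptive]`, `[Flaky]`, or `[Feature:…]` tag. A minimal sketch of that filtering, using Python's `re` as a stand-in for Ginkgo's matcher and hypothetical spec names:

```python
import re

# The --ginkgo.skip pattern from the invocation above, unescaped.
SKIP = r"\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]"

def is_skipped(spec_name: str) -> bool:
    """Return True if a spec name matches the skip regex (so Ginkgo
    would not run it). Spec names below are made up for illustration."""
    return re.search(SKIP, spec_name) is not None

print(is_skipped("[sig-apps] Job should run a job to completion"))  # False
print(is_skipped("[sig-node] NodeProblemDetector [Feature:NPD]"))   # True
```

This is why the run above still executed 752 specs while 4054 were skipped: anything carrying one of those tags never starts.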
				from junit_runner.xml



Passed tests: 751 (collapsed)

Skipped tests: 4054 (collapsed)

Error lines from build-log.txt

... skipping 147 lines ...
INFO: 4430 processes: 4344 remote cache hit, 27 processwrapper-sandbox, 59 remote.
INFO: Build completed successfully, 4495 total actions
make: Leaving directory '/home/prow/go/src/k8s.io/kubernetes'
2020/01/07 08:36:17 process.go:155: Step 'make -C /home/prow/go/src/k8s.io/kubernetes bazel-release' finished in 6m12.122321573s
2020/01/07 08:36:17 util.go:265: Flushing memory.
2020/01/07 08:36:29 util.go:275: flushMem error (page cache): exit status 1
2020/01/07 08:36:29 process.go:153: Running: /home/prow/go/src/k8s.io/release/push-build.sh --nomock --verbose --noupdatelatest --bucket=kubernetes-release-pull --ci --gcs-suffix=/pull-kubernetes-e2e-gce --allow-dup
push-build.sh: BEGIN main on 6a423c7e-3127-11ea-9892-0697f37dabf8 Tue Jan  7 08:36:30 UTC 2020

$TEST_TMPDIR defined: output root default is '/bazel-scratch/.cache/bazel' and max_idle_secs default is '15'.
INFO: Invocation ID: c11d38d6-a7ef-4653-8f22-7b4e4e4d5867
Loading: 
... skipping 822 lines ...
Trying to find master named 'e2e-9457ea9a2c-abe28-master'
Looking for address 'e2e-9457ea9a2c-abe28-master-ip'
Using master: e2e-9457ea9a2c-abe28-master (external IP: 34.82.73.248; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

............Kubernetes cluster created.
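The dotted wait above is a poll-until-ready loop with a 300-second deadline; the real kube-up script does this in bash by repeatedly curling the API server. A generic sketch of that pattern, with a fake check function standing in for the API probe:

```python
import time

def wait_for(check, timeout_secs=300.0, interval_secs=1.0):
    """Poll check() until it returns True or the deadline passes.
    Returns True on success, False on timeout. This mirrors the shape of
    kube-up's cluster-initialization wait, not its actual implementation."""
    deadline = time.monotonic() + timeout_secs
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_secs)
    return False

# Illustration: a probe that succeeds on the third attempt.
attempts = iter([False, False, True])
print(wait_for(lambda: next(attempts), timeout_secs=5, interval_secs=0.01))  # True
```

When the deadline passes without a successful check, the caller surfaces the "may time out if there was some uncaught error during start up" failure noted above.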
Cluster "k8s-jkns-gke-reboot-1-4_e2e-9457ea9a2c-abe28" set.
User "k8s-jkns-gke-reboot-1-4_e2e-9457ea9a2c-abe28" set.
Context "k8s-jkns-gke-reboot-1-4_e2e-9457ea9a2c-abe28" created.
Switched to context "k8s-jkns-gke-reboot-1-4_e2e-9457ea9a2c-abe28".
... skipping 20 lines ...
NAME                                     STATUS                     ROLES    AGE   VERSION
e2e-9457ea9a2c-abe28-master              Ready,SchedulingDisabled   <none>   30s   v1.18.0-alpha.1.385+4dcaca9300036e
e2e-9457ea9a2c-abe28-minion-group-720k   Ready                      <none>   34s   v1.18.0-alpha.1.385+4dcaca9300036e
e2e-9457ea9a2c-abe28-minion-group-dhh9   Ready                      <none>   35s   v1.18.0-alpha.1.385+4dcaca9300036e
e2e-9457ea9a2c-abe28-minion-group-vnrk   Ready                      <none>   35s   v1.18.0-alpha.1.385+4dcaca9300036e
Validate output:
NAME                 STATUS    MESSAGE             ERROR
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 87 lines ...

Jan  7 08:45:25.425: INFO: cluster-master-image: cos-77-12371-89-0
Jan  7 08:45:25.425: INFO: cluster-node-image: cos-77-12371-89-0
Jan  7 08:45:25.425: INFO: >>> kubeConfig: /workspace/.kube/config
Jan  7 08:45:25.429: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan  7 08:45:25.585: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
Jan  7 08:45:25.769: INFO: The status of Pod fluentd-gcp-v3.2.0-jqx2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan  7 08:45:25.769: INFO: 26 / 27 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan  7 08:45:25.769: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan  7 08:45:25.769: INFO: POD                       NODE                                    PHASE    GRACE  CONDITIONS
Jan  7 08:45:25.769: INFO: fluentd-gcp-v3.2.0-jqx2l  e2e-9457ea9a2c-abe28-minion-group-720k  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:44:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:20 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:20 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:44:18 +0000 UTC  }]
Jan  7 08:45:25.769: INFO: 
Jan  7 08:45:27.897: INFO: The status of Pod fluentd-gcp-v3.2.0-jqx2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan  7 08:45:27.897: INFO: 26 / 27 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan  7 08:45:27.897: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan  7 08:45:27.897: INFO: POD                       NODE                                    PHASE    GRACE  CONDITIONS
Jan  7 08:45:27.897: INFO: fluentd-gcp-v3.2.0-jqx2l  e2e-9457ea9a2c-abe28-minion-group-720k  Running  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:44:19 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:20 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:20 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:44:18 +0000 UTC  }]
Jan  7 08:45:27.898: INFO: 
Jan  7 08:45:29.898: INFO: The status of Pod fluentd-gcp-v3.2.0-vh2h2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan  7 08:45:29.898: INFO: 26 / 27 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Jan  7 08:45:29.898: INFO: expected 8 pod replicas in namespace 'kube-system', 8 are Running and Ready.
Jan  7 08:45:29.898: INFO: POD                       NODE                                    PHASE    GRACE  CONDITIONS
Jan  7 08:45:29.898: INFO: fluentd-gcp-v3.2.0-vh2h2  e2e-9457ea9a2c-abe28-minion-group-720k  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:28 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:28 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:28 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-07 08:45:28 +0000 UTC  }]
Jan  7 08:45:29.898: INFO: 
Jan  7 08:45:31.897: INFO: 27 / 27 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
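The "26 / 27 pods … running and ready" accounting above counts a pod only when its phase is `Running` and its `Ready` condition is `True`; the fluentd pod fails the second check until 08:45:31. A simplified sketch of that per-pod test (field names mirror the Kubernetes API; the pod objects are made up for illustration):

```python
def is_running_and_ready(pod) -> bool:
    """True when the pod's phase is Running and its Ready condition is
    True -- the check behind the 'N / 27 pods' log lines (simplified)."""
    status = pod["status"]
    if status["phase"] != "Running":
        return False
    return any(c["type"] == "Ready" and c["status"] == "True"
               for c in status.get("conditions", []))

# Hypothetical pods: one Running but not Ready, one Running and Ready.
fluentd = {"status": {"phase": "Running",
                      "conditions": [{"type": "Ready", "status": "False"}]}}
etcd = {"status": {"phase": "Running",
                   "conditions": [{"type": "Ready", "status": "True"}]}}
pods = [fluentd, etcd]
print(sum(is_running_and_ready(p) for p in pods), "/", len(pods))  # 1 / 2
```

The framework re-polls every two seconds until the count reaches the total, then proceeds to run the suite.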
... skipping 1398 lines ...
  test/e2e/framework/framework.go:175
Jan  7 08:45:35.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2251" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota","total":-1,"completed":1,"skipped":4,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ext3)] volumes
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:35.193: INFO: Driver csi-hostpath doesn't support ext3 -- skipping
... skipping 76 lines ...
• [SLOW TEST:13.666 seconds]
[sig-node] ConfigMap
test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:45.868: INFO: Driver local doesn't support DynamicPV -- skipping
[AfterEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/framework/framework.go:175
Jan  7 08:45:45.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 96 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl copy
  test/e2e/kubectl/kubectl.go:1421
    should copy a file from a running Pod
    test/e2e/kubectl/kubectl.go:1440
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:49.968: INFO: Only supported for providers [azure] (not gce)
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan  7 08:45:49.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 101 lines ...
• [SLOW TEST:18.045 seconds]
[sig-storage] Projected configMap
test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:50.274: INFO: Driver supports dynamic provisioning, skipping InlineVolume pattern
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan  7 08:45:50.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 68 lines ...
test/e2e/framework/framework.go:680
  When creating a pod with privileged
  test/e2e/common/security_context.go:226
    should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
    test/e2e/common/security_context.go:276
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]","total":-1,"completed":1,"skipped":28,"failed":0}

SS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:52.778: INFO: Driver csi-hostpath doesn't support ntfs -- skipping
... skipping 56 lines ...
• [SLOW TEST:21.041 seconds]
[k8s.io] [sig-node] Security Context
test/e2e/framework/framework.go:680
  should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
  test/e2e/node/security_context.go:76
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]","total":-1,"completed":1,"skipped":3,"failed":0}

S
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 33 lines ...
• [SLOW TEST:21.670 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:94
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
... skipping 75 lines ...
• [SLOW TEST:8.878 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":10,"failed":0}
[BeforeEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:54.755: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode
  test/e2e/framework/framework.go:175
Jan  7 08:45:54.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 27 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: aws]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:93
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [BeforeEach]
      test/e2e/storage/testsuites/topology.go:190

      Only supported for providers [aws] (not gce)

      test/e2e/storage/drivers/in_tree.go:1575
------------------------------
... skipping 66 lines ...
• [SLOW TEST:23.218 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0}

S
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:55.508: INFO: Driver local doesn't support DynamicPV -- skipping
... skipping 26 lines ...
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan  7 08:45:53.708: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename topology
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in topology-529
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to schedule a pod which has topologies that conflict with AllowedTopologies
  test/e2e/storage/testsuites/topology.go:190
Jan  7 08:45:54.158: INFO: found topology map[failure-domain.beta.kubernetes.io/zone:us-west1-b]
Jan  7 08:45:54.280: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan  7 08:45:56.496: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Jan  7 08:45:58.098: INFO: Not enough topologies in cluster -- skipping
STEP: Deleting pvc
... skipping 9 lines ...
[sig-storage] In-tree Volumes
test/e2e/storage/utils/framework.go:23
  [Driver: gcepd]
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Dynamic PV (delayed binding)] topology
    test/e2e/storage/testsuites/base.go:93
      should fail to schedule a pod which has topologies that conflict with AllowedTopologies [It]
      test/e2e/storage/testsuites/topology.go:190

      Not enough topologies in cluster -- skipping

      test/e2e/storage/testsuites/topology.go:197
------------------------------
... skipping 234 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:93
      should support existing directory
      test/e2e/storage/testsuites/subpath.go:202
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory","total":-1,"completed":1,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:45:59.162: INFO: Driver local doesn't support InlineVolume -- skipping
[AfterEach] [Testpattern: Inline-volume (default fs)] subPath
  test/e2e/framework/framework.go:175
Jan  7 08:45:59.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 60 lines ...
• [SLOW TEST:29.028 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks succeed
  test/e2e/apps/job.go:45
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks succeed","total":-1,"completed":1,"skipped":1,"failed":0}

S
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 29 lines ...
• [SLOW TEST:11.491 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":2,"skipped":4,"failed":0}

SSSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:46:01.778: INFO: Driver local doesn't support ntfs -- skipping
... skipping 32 lines ...
  test/e2e/framework/framework.go:175
Jan  7 08:46:02.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3807" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":-1,"completed":2,"skipped":2,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:46:02.301: INFO: Only supported for providers [openstack] (not gce)
[AfterEach] [Testpattern: Dynamic PV (immediate binding)] topology
  test/e2e/framework/framework.go:175
Jan  7 08:46:02.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
... skipping 94 lines ...
  test/e2e/storage/in_tree_volumes.go:70
    [Testpattern: Inline-volume (default fs)] subPath
    test/e2e/storage/testsuites/base.go:93
      should support readOnly file specified in the volumeMount [LinuxOnly]
      test/e2e/storage/testsuites/subpath.go:376
------------------------------
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]","total":-1,"completed":1,"skipped":8,"failed":0}

SSSSS
------------------------------
[BeforeEach] [Testpattern: Pre-provisioned PV (default fs)] subPath
  test/e2e/storage/testsuites/base.go:94
Jan  7 08:46:03.194: INFO: Driver hostPathSymlink doesn't support PreprovisionedPV -- skipping
... skipping 50 lines ...
  test/e2e/framework/framework.go:175
Jan  7 08:46:03.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2852" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC","total":-1,"completed":3,"skipped":12,"failed":0}

SS
------------------------------
[BeforeEach] [sig-cli] Kubectl alpha client
  test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
... skipping 20 lines ...
  test/e2e/framework/framework.go:175
Jan  7 08:46:04.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1192" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob","total":-1,"completed":2,"skipped":15,"failed":0}
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/base.go:94
[BeforeEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
  test/e2e/storage/testsuites/volume_expand.go:92
Jan  7 08:46:04.263: INFO: Driver "nfs" does not support volume expansion - skipping
[AfterEach] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand
... skipping 192 lines ...
  test/e2e/kubectl/portforward.go:442
    that expects a client request
    test/e2e/kubectl/portforward.go:443
      should support a client that connects, sends DATA, and disconnects
      test/e2e/kubectl/portfor