Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-16 11:09
Elapsed: 2h33m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 348 lines ...
Trying to find master named 'kt2-5be7f4b0-16de-master'
Looking for address 'kt2-5be7f4b0-16de-master-ip'
Using master: kt2-5be7f4b0-16de-master (external IP: 35.222.34.167; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

................Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de" set.
User "k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de" set.
Context "k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de" created.
Switched to context "k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de".
... skipping 26 lines ...
kt2-5be7f4b0-16de-minion-group-2z4b   Ready                      <none>   18s   v1.23.0-alpha.2.40+bea2e462a5b8c2
kt2-5be7f4b0-16de-minion-group-j2m1   Ready                      <none>   18s   v1.23.0-alpha.2.40+bea2e462a5b8c2
kt2-5be7f4b0-16de-minion-group-lhnl   Ready                      <none>   17s   v1.23.0-alpha.2.40+bea2e462a5b8c2
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
etcd-1               Healthy   {"health":"true","reason":""}   
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 40 lines ...
Specify --start=53074 in the next get-serial-port-output invocation to get only the new output starting from here.
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/cluster-logs'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-5be7f4b0-16de-minion-group-lhnl
... skipping 8 lines ...
load pubkey "/root/.ssh/google_compute_engine": invalid format
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-5be7f4b0-16de-minion-group
NODE_NAMES=kt2-5be7f4b0-16de-minion-group-2z4b kt2-5be7f4b0-16de-minion-group-j2m1 kt2-5be7f4b0-16de-minion-group-lhnl
Failures for kt2-5be7f4b0-16de-minion-group (if any):
I0916 11:36:57.911461    2874 dumplogs.go:121] About to run: [/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl cluster-info dump]
I0916 11:36:57.911497    2874 local.go:42] ⚙️ /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl cluster-info dump
I0916 11:36:59.155803    2874 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-ginkgo ; --focus-regex=\[Conformance\] ; --use-built-binaries
I0916 11:36:59.246566   96824 ginkgo.go:120] Using kubeconfig at /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
I0916 11:36:59.246832   96824 ginkgo.go:90] Running ginkgo test as /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/ginkgo [--nodes=1 /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/e2e.test -- --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --kubectl-path=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --ginkgo.flakeAttempts=1 --ginkgo.skip= --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d]
Sep 16 11:36:59.330: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0916 11:36:59.330501   96838 e2e.go:127] Starting e2e run "b403288e-7c8d-44f9-a5a1-8d0cdbfce5f9" on Ginkgo node 1
{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1631792219 - Will randomize all specs
Will run 346 of 6852 specs

Sep 16 11:37:01.329: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
Sep 16 11:37:01.332: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 16 11:37:01.351: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 16 11:37:01.395: INFO: The status of Pod l7-default-backend-79858d8f86-8fmjs is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 11:37:01.395: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-wcmwd is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 11:37:01.395: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 16 11:37:01.395: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 11:37:01.395: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 11:37:01.395: INFO: l7-default-backend-79858d8f86-8fmjs     kt2-5be7f4b0-16de-minion-group-j2m1  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:27 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:27 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:27 +0000 UTC  }]
Sep 16 11:37:01.395: INFO: metrics-server-v0.5.0-6554f5dbd8-wcmwd  kt2-5be7f4b0-16de-minion-group-2z4b  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:52 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:52 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:52 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 11:35:52 +0000 UTC  }]
Sep 16 11:37:01.396: INFO: 
... skipping 40 lines ...
• [SLOW TEST:6.226 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":1,"skipped":59,"failed":0}
SSSSSSS
------------------------------
[sig-node] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 13 lines ...
Sep 16 11:37:13.748: INFO: The status of Pod pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 16 11:37:14.265: INFO: Successfully updated pod "pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514"
Sep 16 11:37:14.265: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514" in namespace "pods-512" to be "terminated due to deadline exceeded"
Sep 16 11:37:14.268: INFO: Pod "pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514": Phase="Running", Reason="", readiness=true. Elapsed: 3.430892ms
Sep 16 11:37:16.272: INFO: Pod "pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 2.007846086s
Sep 16 11:37:16.273: INFO: Pod "pod-update-activedeadlineseconds-66be8120-01f6-4f0a-ad67-f5c94c954514" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:37:16.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-512" for this suite.

• [SLOW TEST:6.605 seconds]
[sig-node] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":66,"failed":0}
SSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 30 lines ...
• [SLOW TEST:18.437 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":3,"skipped":69,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 11:37:34.718: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 16 11:37:34.758: INFO: Waiting up to 5m0s for pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136" in namespace "emptydir-8938" to be "Succeeded or Failed"
Sep 16 11:37:34.767: INFO: Pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136": Phase="Pending", Reason="", readiness=false. Elapsed: 8.340657ms
Sep 16 11:37:36.771: INFO: Pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012243823s
Sep 16 11:37:38.775: INFO: Pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016299166s
Sep 16 11:37:40.780: INFO: Pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021371528s
STEP: Saw pod success
Sep 16 11:37:40.780: INFO: Pod "pod-52f20136-8b4d-43ad-b8d2-cac88ca09136" satisfied condition "Succeeded or Failed"
Sep 16 11:37:40.782: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-52f20136-8b4d-43ad-b8d2-cac88ca09136 container test-container: <nil>
STEP: delete the pod
Sep 16 11:37:40.800: INFO: Waiting for pod pod-52f20136-8b4d-43ad-b8d2-cac88ca09136 to disappear
Sep 16 11:37:40.803: INFO: Pod pod-52f20136-8b4d-43ad-b8d2-cac88ca09136 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 3 lines ...
• [SLOW TEST:6.093 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":4,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 19 lines ...
• [SLOW TEST:30.241 seconds]
[sig-network] EndpointSlice
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":5,"skipped":102,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods Extended Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:38:11.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2418" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":6,"skipped":118,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 97 lines ...
• [SLOW TEST:46.754 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":7,"skipped":139,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 51 lines ...
• [SLOW TEST:23.234 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":8,"skipped":159,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
Sep 16 11:39:23.193: INFO: The status of Pod labelsupdate23c9bda2-8b01-4c46-9370-532f957e23d0 is Running (Ready = true)
Sep 16 11:39:23.730: INFO: Successfully updated pod "labelsupdate23c9bda2-8b01-4c46-9370-532f957e23d0"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:39:25.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9131" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":9,"skipped":231,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 16 11:39:25.837: INFO: The status of Pod pod-exec-websocket-1adee735-9692-4904-9e9f-3f14cf6beac9 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 11:39:27.842: INFO: The status of Pod pod-exec-websocket-1adee735-9692-4904-9e9f-3f14cf6beac9 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:39:28.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6832" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":10,"skipped":241,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
• [SLOW TEST:13.932 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":11,"skipped":256,"failed":0}
SSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 16 11:39:42.035: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:39:45.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2249" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":12,"skipped":263,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 11:39:45.826: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28" in namespace "downward-api-6120" to be "Succeeded or Failed"
Sep 16 11:39:45.830: INFO: Pod "downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023561ms
Sep 16 11:39:47.838: INFO: Pod "downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011859214s
STEP: Saw pod success
Sep 16 11:39:47.838: INFO: Pod "downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28" satisfied condition "Succeeded or Failed"
Sep 16 11:39:47.841: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28 container client-container: <nil>
STEP: delete the pod
Sep 16 11:39:47.918: INFO: Waiting for pod downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28 to disappear
Sep 16 11:39:47.923: INFO: Pod downwardapi-volume-bf0d2c9d-1393-47ca-b50f-4a5c800c8c28 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:39:47.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6120" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":276,"failed":0}

------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should validate Statefulset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 44 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":14,"skipped":276,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: waiting for the root ca configmap reconciled
Sep 16 11:40:09.403: INFO: Reconciled root ca configmap in namespace "svcaccounts-5236"
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:40:09.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5236" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":15,"skipped":285,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 27 lines ...
• [SLOW TEST:92.731 seconds]
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":16,"skipped":309,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 21 lines ...
• [SLOW TEST:7.254 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":17,"skipped":316,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 23 lines ...
• [SLOW TEST:12.396 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":18,"skipped":321,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.261 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":19,"skipped":358,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 27 lines ...
• [SLOW TEST:74.481 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":20,"skipped":377,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 14 lines ...
Sep 16 11:43:31.313: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:43:43.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9882" for this suite.
STEP: Destroying namespace "webhook-9882-markers" for this suite.
... skipping 3 lines ...
• [SLOW TEST:16.437 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":21,"skipped":383,"failed":0}
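The STEPs in the webhook test above exercise a simple decision rule. A minimal sketch (not the e2e framework's code; the function name and signature are hypothetical) of the admission semantics being verified: a request is rejected only when the webhook's latency exceeds the configured timeout *and* the webhook's `failurePolicy` is `Fail`; with `Ignore` the request is admitted anyway, and an empty timeout defaults to 10s in `admissionregistration.k8s.io/v1`.

```python
def admit(webhook_latency_s, timeout_seconds=None, failure_policy="Fail"):
    """Sketch of v1 admission webhook timeout semantics (hypothetical helper)."""
    timeout = 10 if timeout_seconds is None else timeout_seconds  # v1 defaults timeoutSeconds to 10
    if webhook_latency_s <= timeout:
        return True  # webhook answered in time, request admitted
    # Webhook timed out: the outcome depends on failurePolicy.
    return failure_policy == "Ignore"

# Mirrors the four STEPs above for a 5s-slow webhook:
assert admit(5, timeout_seconds=1, failure_policy="Fail") is False   # request fails
assert admit(5, timeout_seconds=1, failure_policy="Ignore") is True  # no error
assert admit(5, timeout_seconds=30) is True                          # timeout longer than latency
assert admit(5) is True                                              # empty timeout -> 10s default
```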
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 23 lines ...
• [SLOW TEST:13.241 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":22,"skipped":426,"failed":0}
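The Namespaces test above asserts that deleting a namespace eventually drains its pods. A minimal sketch of that wait, assuming a hypothetical `get_pods` callback standing in for a List call against the API server:

```python
def wait_for_pods_removed(get_pods, attempts=5):
    """Poll until the deleted namespace reports no remaining pods (sketch, not framework code)."""
    for _ in range(attempts):
        if not get_pods():  # empty list: namespace finalizer has removed all pods
            return True
    return False  # timed out while pods were still terminating

# Simulate a namespace whose pods drain over three polls.
polls = iter([["pod-a", "pod-b"], ["pod-a"], []])
assert wait_for_pods_removed(lambda: next(polls)) is True
```

The real framework wait is the same shape with a wall-clock timeout instead of an attempt count.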
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-e4e7aa55-1d50-4c74-82b8-2b17f0396ad0
STEP: Creating a pod to test consume configMaps
Sep 16 11:43:57.277: INFO: Waiting up to 5m0s for pod "pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc" in namespace "configmap-9901" to be "Succeeded or Failed"
Sep 16 11:43:57.283: INFO: Pod "pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.006847ms
Sep 16 11:43:59.287: INFO: Pod "pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010117709s
Sep 16 11:44:01.293: INFO: Pod "pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01604058s
STEP: Saw pod success
Sep 16 11:44:01.293: INFO: Pod "pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc" satisfied condition "Succeeded or Failed"
Sep 16 11:44:01.297: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc container agnhost-container: <nil>
STEP: delete the pod
Sep 16 11:44:01.348: INFO: Waiting for pod pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc to disappear
Sep 16 11:44:01.353: INFO: Pod pod-configmaps-d38ce62a-adf7-48f7-8105-497717930efc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:01.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9901" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":460,"failed":0}
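The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above are a poll over `Pod.status.phase` until a terminal phase appears. A minimal sketch of that condition (hypothetical helper, not the framework's actual implementation):

```python
TERMINAL_PHASES = {"Succeeded", "Failed"}

def wait_for_terminal_phase(observed_phases):
    """observed_phases: successive Pod.status.phase values seen while polling (sketch)."""
    for phase in observed_phases:
        if phase in TERMINAL_PHASES:
            return phase
    raise TimeoutError("pod never reached a terminal phase")

# The ConfigMap pod above went Pending -> Pending -> Succeeded over ~4s.
assert wait_for_terminal_phase(["Pending", "Pending", "Succeeded"]) == "Succeeded"
```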
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should support creating EndpointSlice API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 24 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:01.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9955" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":24,"skipped":482,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 11:44:01.537: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 11:44:01.593: INFO: Waiting up to 5m0s for pod "downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490" in namespace "downward-api-8092" to be "Succeeded or Failed"
Sep 16 11:44:01.598: INFO: Pod "downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490": Phase="Pending", Reason="", readiness=false. Elapsed: 5.779367ms
Sep 16 11:44:03.604: INFO: Pod "downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011509417s
STEP: Saw pod success
Sep 16 11:44:03.604: INFO: Pod "downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490" satisfied condition "Succeeded or Failed"
Sep 16 11:44:03.608: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490 container dapi-container: <nil>
STEP: delete the pod
Sep 16 11:44:03.640: INFO: Waiting for pod downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490 to disappear
Sep 16 11:44:03.648: INFO: Pod downward-api-8b60d6f5-e108-4a36-b1c0-8b3621fab490 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:03.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8092" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":25,"skipped":577,"failed":0}
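The Downward API test above checks that `fieldRef` entries in the pod spec surface as environment variables resolved from pod metadata and status. A minimal sketch of that mapping, using a hypothetical dict-based pod object in place of the real API types:

```python
def resolve_downward_env(env_spec, pod):
    """Map env-var names to values via Downward API-style fieldPath lookups (sketch)."""
    resolved = {}
    for name, field_path in env_spec.items():
        # A fieldPath like "metadata.name" is split into section and key.
        section, key = field_path.split(".", 1)
        resolved[name] = pod[section][key]
    return resolved

pod = {"metadata": {"name": "downward-api-demo", "namespace": "downward-api-8092"},
       "status": {"podIP": "10.64.3.42"}}  # hypothetical values for illustration
env = resolve_downward_env(
    {"POD_NAME": "metadata.name",
     "POD_NAMESPACE": "metadata.namespace",
     "POD_IP": "status.podIP"},
    pod)
```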
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:07.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8238" for this suite.
STEP: Destroying namespace "webhook-8238-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":26,"skipped":590,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 11:44:07.972: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 16 11:44:08.209: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:10.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8061" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":27,"skipped":605,"failed":0}
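The InitContainer test above verifies that, with `restartPolicy: Never`, a failing init container marks the whole pod `Failed` and the app containers never start (whereas `Always`/`OnFailure` would retry the init container). A minimal sketch of those semantics, with a hypothetical `run_pod` helper:

```python
def run_pod(init_results, restart_policy="Never"):
    """Return (pod_phase, app_containers_started) given init-container outcomes (sketch)."""
    for succeeded in init_results:
        if not succeeded:
            if restart_policy == "Never":
                return ("Failed", False)  # failed init container fails the pod outright
            return ("Pending", False)     # Always/OnFailure: kubelet retries the init container
    return ("Running", True)              # all init containers succeeded; app containers start

assert run_pod([True, False]) == ("Failed", False)
assert run_pod([True, True]) == ("Running", True)
```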
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 86 lines ...
&Pod{ObjectMeta:{webserver-deployment-795d758f88-8x64q webserver-deployment-795d758f88- deployment-3702  cc80dde8-6f35-4099-b817-b22bfce38f30 3245 0 2021-09-16 11:44:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc63b0 0xc004cc63b1}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xtlbt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtlbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-lhnl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 
11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:,StartTime:2021-09-16 11:44:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.947: INFO: Pod "webserver-deployment-795d758f88-czcch" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-czcch webserver-deployment-795d758f88- deployment-3702  539ef9bf-b4bf-4341-b33e-f43867d6f5c2 3250 0 2021-09-16 11:44:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6580 0xc004cc6581}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dtrg2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dtrg2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-j2m1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 
11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:,StartTime:2021-09-16 11:44:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.948: INFO: Pod "webserver-deployment-795d758f88-dsccv" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-dsccv webserver-deployment-795d758f88- deployment-3702  225688f4-07e1-46bd-8584-202beff7afca 3168 0 2021-09-16 11:44:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6750 0xc004cc6751}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hg2wg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hg2wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-2z4b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 
11:44:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-16 11:44:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.948: INFO: Pod "webserver-deployment-795d758f88-g27kl" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-g27kl webserver-deployment-795d758f88- deployment-3702  7c7a8f15-ce0c-4c62-a101-3b0b66580d17 3257 0 2021-09-16 11:44:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6920 0xc004cc6921}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.19\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9sfs6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9sfs6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-lhnl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:10.64.3.19,StartTime:2021-09-16 11:44:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.19,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.948: INFO: Pod "webserver-deployment-795d758f88-gvqqg" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-gvqqg webserver-deployment-795d758f88- deployment-3702  2efdb672-e517-4a27-b3df-b17f6bd0f018 3246 0 2021-09-16 11:44:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6b20 0xc004cc6b21}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sfhj8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sfhj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-j2m1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:,StartTime:2021-09-16 11:44:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.948: INFO: Pod "webserver-deployment-795d758f88-h5gtn" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-h5gtn webserver-deployment-795d758f88- deployment-3702  eca5154a-1137-409a-abd3-788510f40518 3248 0 2021-09-16 11:44:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6cf0 0xc004cc6cf1}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rww65,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rww65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-2z4b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-16 11:44:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 16 11:44:18.948: INFO: Pod "webserver-deployment-795d758f88-lf8td" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-lf8td webserver-deployment-795d758f88- deployment-3702  98c4b01b-fa0c-4a02-ad67-0fb669d7efba 3232 0 2021-09-16 11:44:16 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 02dfe026-f6b2-47c2-af61-e2c75b57975a 0xc004cc6ec0 0xc004cc6ec1}] []  [{kube-controller-manager Update v1 2021-09-16 11:44:16 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"02dfe026-f6b2-47c2-af61-e2c75b57975a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 11:44:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h876h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h876h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-2z4b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:16 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 11:44:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-16 11:44:16 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 11 lines ...
• [SLOW TEST:8.399 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":28,"skipped":635,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 11:44:18.961: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 16 11:44:19.051: INFO: Waiting up to 5m0s for pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0" in namespace "emptydir-5206" to be "Succeeded or Failed"
Sep 16 11:44:19.057: INFO: Pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.505254ms
Sep 16 11:44:21.061: INFO: Pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00949653s
Sep 16 11:44:23.065: INFO: Pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01416146s
Sep 16 11:44:25.081: INFO: Pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029894174s
STEP: Saw pod success
Sep 16 11:44:25.081: INFO: Pod "pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0" satisfied condition "Succeeded or Failed"
Sep 16 11:44:25.090: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0 container test-container: <nil>
STEP: delete the pod
Sep 16 11:44:25.133: INFO: Waiting for pod pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0 to disappear
Sep 16 11:44:25.142: INFO: Pod pod-bc3eaf5e-1d49-463f-8745-45445a8d71f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 3 lines ...
• [SLOW TEST:6.206 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":29,"skipped":645,"failed":0}
SSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:25.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-153" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":30,"skipped":654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:25.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4108" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":31,"skipped":703,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 11 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:29.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4234" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":32,"skipped":716,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-fa553582-55db-44d0-bd24-27c18cf9f59f
STEP: Creating a pod to test consume secrets
Sep 16 11:44:29.975: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3" in namespace "projected-5251" to be "Succeeded or Failed"
Sep 16 11:44:29.992: INFO: Pod "pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.723499ms
Sep 16 11:44:31.995: INFO: Pod "pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02015091s
STEP: Saw pod success
Sep 16 11:44:31.995: INFO: Pod "pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3" satisfied condition "Succeeded or Failed"
Sep 16 11:44:31.998: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 11:44:32.020: INFO: Waiting for pod pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3 to disappear
Sep 16 11:44:32.024: INFO: Pod pod-projected-secrets-28eb6f31-10b1-42bd-9ab8-325ec157bcc3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:32.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5251" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":33,"skipped":756,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 16 11:44:32.034: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 16 11:44:32.114: INFO: Waiting up to 5m0s for pod "var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8" in namespace "var-expansion-3345" to be "Succeeded or Failed"
Sep 16 11:44:32.123: INFO: Pod "var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.033585ms
Sep 16 11:44:34.127: INFO: Pod "var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013139225s
STEP: Saw pod success
Sep 16 11:44:34.127: INFO: Pod "var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8" satisfied condition "Succeeded or Failed"
Sep 16 11:44:34.131: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8 container dapi-container: <nil>
STEP: delete the pod
Sep 16 11:44:34.153: INFO: Waiting for pod var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8 to disappear
Sep 16 11:44:34.158: INFO: Pod var-expansion-107e514b-bb91-498c-bf22-b95d74481eb8 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:34.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3345" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":34,"skipped":772,"failed":0}
S
------------------------------
[sig-instrumentation] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:34.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9886" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":35,"skipped":773,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
Sep 16 11:44:36.332: INFO: The status of Pod annotationupdatee3ee4d26-a148-4213-9c1e-bc160719b5b1 is Running (Ready = true)
Sep 16 11:44:36.856: INFO: Successfully updated pod "annotationupdatee3ee4d26-a148-4213-9c1e-bc160719b5b1"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:38.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1610" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":780,"failed":0}
SSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 11:44:38.941: INFO: Waiting up to 5m0s for pod "busybox-user-65534-784b8ae3-e90e-48c5-be71-f03e428ed664" in namespace "security-context-test-4738" to be "Succeeded or Failed"
Sep 16 11:44:38.944: INFO: Pod "busybox-user-65534-784b8ae3-e90e-48c5-be71-f03e428ed664": Phase="Pending", Reason="", readiness=false. Elapsed: 3.136603ms
Sep 16 11:44:40.949: INFO: Pod "busybox-user-65534-784b8ae3-e90e-48c5-be71-f03e428ed664": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007742333s
Sep 16 11:44:40.949: INFO: Pod "busybox-user-65534-784b8ae3-e90e-48c5-be71-f03e428ed664" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:44:40.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4738" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.196 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":346,"completed":38,"skipped":837,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:6.387 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":39,"skipped":891,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-587c8b2f-18ec-4fce-a98d-fca512ed1ab8
STEP: Creating a pod to test consume secrets
Sep 16 11:45:03.757: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18" in namespace "projected-6522" to be "Succeeded or Failed"
Sep 16 11:45:03.766: INFO: Pod "pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18": Phase="Pending", Reason="", readiness=false. Elapsed: 9.056267ms
Sep 16 11:45:05.771: INFO: Pod "pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01395491s
STEP: Saw pod success
Sep 16 11:45:05.771: INFO: Pod "pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18" satisfied condition "Succeeded or Failed"
Sep 16 11:45:05.774: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 11:45:05.803: INFO: Waiting for pod pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18 to disappear
Sep 16 11:45:05.806: INFO: Pod pod-projected-secrets-1fb2a375-de09-4af8-9ec6-4d6974e07e18 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:05.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6522" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":40,"skipped":892,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 16 11:45:05.816: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 16 11:45:05.878: INFO: Waiting up to 5m0s for pod "client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2" in namespace "containers-9795" to be "Succeeded or Failed"
Sep 16 11:45:05.883: INFO: Pod "client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.369941ms
Sep 16 11:45:07.888: INFO: Pod "client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010039625s
STEP: Saw pod success
Sep 16 11:45:07.888: INFO: Pod "client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2" satisfied condition "Succeeded or Failed"
Sep 16 11:45:07.892: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 11:45:07.918: INFO: Waiting for pod client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2 to disappear
Sep 16 11:45:07.922: INFO: Pod client-containers-aff6b21d-869f-4c19-8370-4c488671dcd2 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:07.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9795" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":908,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 50 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":931,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-4bfcf827-11df-4834-928f-111d7fc18b03
STEP: Creating a pod to test consume secrets
Sep 16 11:45:31.213: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927" in namespace "projected-5329" to be "Succeeded or Failed"
Sep 16 11:45:31.220: INFO: Pod "pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927": Phase="Pending", Reason="", readiness=false. Elapsed: 6.826223ms
Sep 16 11:45:33.225: INFO: Pod "pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012370449s
STEP: Saw pod success
Sep 16 11:45:33.226: INFO: Pod "pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927" satisfied condition "Succeeded or Failed"
Sep 16 11:45:33.229: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 11:45:33.257: INFO: Waiting for pod pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927 to disappear
Sep 16 11:45:33.261: INFO: Pod pod-projected-secrets-e43e9587-f8c9-4a59-8f86-7ff260ab1927 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:33.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5329" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":43,"skipped":932,"failed":0}
SSSSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 11:45:33.399: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7e137b8f-aa3c-42f7-929f-6cacbc5dd651" in namespace "security-context-test-7216" to be "Succeeded or Failed"
Sep 16 11:45:33.417: INFO: Pod "alpine-nnp-false-7e137b8f-aa3c-42f7-929f-6cacbc5dd651": Phase="Pending", Reason="", readiness=false. Elapsed: 18.289119ms
Sep 16 11:45:35.421: INFO: Pod "alpine-nnp-false-7e137b8f-aa3c-42f7-929f-6cacbc5dd651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.022639434s
Sep 16 11:45:35.422: INFO: Pod "alpine-nnp-false-7e137b8f-aa3c-42f7-929f-6cacbc5dd651" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:35.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7216" for this suite.
•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should support CronJob API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 23 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:35.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4571" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":45,"skipped":968,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:35.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7318" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":46,"skipped":984,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-63169969-f1a3-4ba3-b287-4d3e2093ba41
STEP: Creating a pod to test consume configMaps
Sep 16 11:45:35.920: INFO: Waiting up to 5m0s for pod "pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b" in namespace "configmap-2125" to be "Succeeded or Failed"
Sep 16 11:45:35.927: INFO: Pod "pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.847213ms
Sep 16 11:45:37.932: INFO: Pod "pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b": Phase="Running", Reason="", readiness=true. Elapsed: 2.0120268s
Sep 16 11:45:39.936: INFO: Pod "pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016501033s
STEP: Saw pod success
Sep 16 11:45:39.936: INFO: Pod "pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b" satisfied condition "Succeeded or Failed"
Sep 16 11:45:39.941: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b container agnhost-container: <nil>
STEP: delete the pod
Sep 16 11:45:39.965: INFO: Waiting for pod pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b to disappear
Sep 16 11:45:39.974: INFO: Pod pod-configmaps-7225c33d-a69c-4281-aa24-6767e1b9eb6b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:39.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2125" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":985,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 20 lines ...
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7903 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 16 11:45:44.160: INFO: successfully validated that service multi-endpoint-test in namespace services-7903 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Sep 16 11:45:44.160: INFO: Creating new exec pod
Sep 16 11:45:47.183: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-7903 exec execpodtxc9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 16 11:45:48.445: INFO: rc: 1
Sep 16 11:45:48.445: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-7903 exec execpodtxc9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: connect to multi-endpoint-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 11:45:49.446: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-7903 exec execpodtxc9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 16 11:45:50.686: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Sep 16 11:45:50.686: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 11:45:50.686: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-7903 exec execpodtxc9r -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.31.82 80'
... skipping 21 lines ...
• [SLOW TEST:12.017 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":346,"completed":48,"skipped":993,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:45:52.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2404" for this suite.
STEP: Destroying namespace "nspatchtest-133d35af-efe1-41be-a18c-3506fdb32ed5-1202" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":49,"skipped":996,"failed":0}
SSSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 27 lines ...
• [SLOW TEST:135.420 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":50,"skipped":1001,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 40 lines ...
• [SLOW TEST:5.190 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":51,"skipped":1024,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 11:48:12.759: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 11:48:12.848: INFO: Waiting up to 5m0s for pod "downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8" in namespace "downward-api-8375" to be "Succeeded or Failed"
Sep 16 11:48:12.858: INFO: Pod "downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.575536ms
Sep 16 11:48:14.862: INFO: Pod "downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014686742s
Sep 16 11:48:16.867: INFO: Pod "downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019367311s
STEP: Saw pod success
Sep 16 11:48:16.867: INFO: Pod "downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8" satisfied condition "Succeeded or Failed"
Sep 16 11:48:16.870: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8 container dapi-container: <nil>
STEP: delete the pod
Sep 16 11:48:16.920: INFO: Waiting for pod downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8 to disappear
Sep 16 11:48:16.925: INFO: Pod downward-api-558dbe47-373b-454a-bb23-73a1c7e83fa8 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:48:16.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8375" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":52,"skipped":1083,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-mjpg
STEP: Creating a pod to test atomic-volume-subpath
Sep 16 11:48:17.022: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mjpg" in namespace "subpath-5549" to be "Succeeded or Failed"
Sep 16 11:48:17.027: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.942114ms
Sep 16 11:48:19.036: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 2.01345147s
Sep 16 11:48:21.041: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 4.018235922s
Sep 16 11:48:23.045: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 6.022770101s
Sep 16 11:48:25.050: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 8.027275675s
Sep 16 11:48:27.054: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 10.031543485s
Sep 16 11:48:29.060: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 12.037546972s
Sep 16 11:48:31.065: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 14.042201015s
Sep 16 11:48:33.069: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 16.046509193s
Sep 16 11:48:35.073: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 18.051009757s
Sep 16 11:48:37.078: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Running", Reason="", readiness=true. Elapsed: 20.055441008s
Sep 16 11:48:39.083: INFO: Pod "pod-subpath-test-secret-mjpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.060440133s
STEP: Saw pod success
Sep 16 11:48:39.083: INFO: Pod "pod-subpath-test-secret-mjpg" satisfied condition "Succeeded or Failed"
Sep 16 11:48:39.086: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-subpath-test-secret-mjpg container test-container-subpath-secret-mjpg: <nil>
STEP: delete the pod
Sep 16 11:48:39.125: INFO: Waiting for pod pod-subpath-test-secret-mjpg to disappear
Sep 16 11:48:39.128: INFO: Pod pod-subpath-test-secret-mjpg no longer exists
STEP: Deleting pod pod-subpath-test-secret-mjpg
Sep 16 11:48:39.128: INFO: Deleting pod "pod-subpath-test-secret-mjpg" in namespace "subpath-5549"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":53,"skipped":1096,"failed":0}
SSSSSSSSSSSSSS
------------------------------
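The repeated `Waiting up to 5m0s for pod … Elapsed: …` lines in the blocks above come from the e2e framework's pod-phase poll loop: check the phase roughly every two seconds, log the elapsed time, stop on `Succeeded`/`Failed` or at the deadline. A minimal standalone sketch of that pattern (hypothetical `get_phase` callback and helper name, Python rather than the framework's actual Go code):

```python
import time

def wait_for_pod_phase(get_phase, target=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a target phase or the timeout
    expires. Mirrors the log pattern above: one 'Elapsed' report per
    poll, ~2s apart, up to a 5m0s deadline."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        print(f'Pod phase={phase!r}. Elapsed: {elapsed:.3f}s')
        if phase in target:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f'pod still {phase!r} after {timeout}s')
        sleep(interval)
```

The injectable `clock`/`sleep` parameters are only there so the loop can be exercised without real delays; the framework's implementation differs in detail.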
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-2815/configmap-test-2a7634b4-f111-4346-beb3-7598526c02b0
STEP: Creating a pod to test consume configMaps
Sep 16 11:48:39.200: INFO: Waiting up to 5m0s for pod "pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a" in namespace "configmap-2815" to be "Succeeded or Failed"
Sep 16 11:48:39.207: INFO: Pod "pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.469793ms
Sep 16 11:48:41.212: INFO: Pod "pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011093785s
STEP: Saw pod success
Sep 16 11:48:41.212: INFO: Pod "pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a" satisfied condition "Succeeded or Failed"
Sep 16 11:48:41.214: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a container env-test: <nil>
STEP: delete the pod
Sep 16 11:48:41.232: INFO: Waiting for pod pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a to disappear
Sep 16 11:48:41.236: INFO: Pod pod-configmaps-db43cf0f-af1b-47b3-abab-f41a8426585a no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:48:41.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2815" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":54,"skipped":1110,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
STEP: creating replication controller affinity-nodeport-timeout in namespace services-5690
I0916 11:48:43.559294   96838 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5690, replica count: 3
I0916 11:48:46.610389   96838 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 16 11:48:46.625: INFO: Creating new exec pod
Sep 16 11:48:49.682: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Sep 16 11:48:51.038: INFO: rc: 1
Sep 16 11:48:51.038: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 11:48:52.039: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Sep 16 11:48:53.291: INFO: rc: 1
Sep 16 11:48:53.291: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 11:48:54.039: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Sep 16 11:48:54.208: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Sep 16 11:48:54.208: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 11:48:54.208: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-5690 exec execpod-affinitynsgbl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.243.50 80'
... skipping 47 lines ...
• [SLOW TEST:57.118 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":55,"skipped":1129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
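The `Connection refused … Retrying...` sequence above is a retry-until-reachable probe: the framework re-runs the same `nc -v -t -w 2 <service> 80` command about once a second until it exits 0, treating non-zero exit codes as "service not yet reachable". A generic sketch of that retry wrapper (hypothetical helper, Python rather than the framework's Go; the real check also inspects stdout):

```python
import subprocess
import time

def retry_until_success(argv, attempts=30, delay=1.0, run=subprocess.run):
    """Re-run argv until it exits 0, like the service-reachability
    check that retries nc on 'Connection refused'."""
    for attempt in range(1, attempts + 1):
        proc = run(argv, capture_output=True, text=True)
        if proc.returncode == 0:
            return proc
        print(f"rc: {proc.returncode}; Retrying... (attempt {attempt})")
        time.sleep(delay)
    raise RuntimeError(f"{argv!r} still failing after {attempts} attempts")
```

The injectable `run` parameter exists only so the loop can be tested without spawning real processes.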
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 19 lines ...
• [SLOW TEST:10.135 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":56,"skipped":1151,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 16 11:49:48.560: INFO: Asynchronously running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-3248 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:49:48.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3248" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":346,"completed":57,"skipped":1152,"failed":0}
SSSSSSSSSSSS
------------------------------
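In the proxy test above, `kubectl proxy -p 0` asks the proxy to bind an ephemeral port, so the chosen port has to be read back from the proxy's startup output before `/api/` can be curled. A sketch of that parse step (the `Starting to serve on <addr>:<port>` line format is an assumption based on kubectl's usual startup message, and `parse_proxy_port` is a hypothetical name):

```python
import re

def parse_proxy_port(line):
    """Extract the bound port from a kubectl proxy startup line,
    e.g. 'Starting to serve on 127.0.0.1:34567'."""
    m = re.search(r"[Ss]tarting to serve on .*:(\d+)\s*$", line)
    if not m:
        raise ValueError(f"unrecognized proxy startup line: {line!r}")
    return int(m.group(1))
```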
[sig-node] Variable Expansion 
  should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 16 11:49:48.649: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep 16 11:49:48.734: INFO: Waiting up to 5m0s for pod "var-expansion-3d479720-5080-4857-b97d-d7e918de5209" in namespace "var-expansion-2456" to be "Succeeded or Failed"
Sep 16 11:49:48.743: INFO: Pod "var-expansion-3d479720-5080-4857-b97d-d7e918de5209": Phase="Pending", Reason="", readiness=false. Elapsed: 8.937929ms
Sep 16 11:49:50.749: INFO: Pod "var-expansion-3d479720-5080-4857-b97d-d7e918de5209": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014851033s
STEP: Saw pod success
Sep 16 11:49:50.749: INFO: Pod "var-expansion-3d479720-5080-4857-b97d-d7e918de5209" satisfied condition "Succeeded or Failed"
Sep 16 11:49:50.752: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod var-expansion-3d479720-5080-4857-b97d-d7e918de5209 container dapi-container: <nil>
STEP: delete the pod
Sep 16 11:49:50.781: INFO: Waiting for pod var-expansion-3d479720-5080-4857-b97d-d7e918de5209 to disappear
Sep 16 11:49:50.785: INFO: Pod var-expansion-3d479720-5080-4857-b97d-d7e918de5209 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:49:50.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2456" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":58,"skipped":1164,"failed":0}

------------------------------
[sig-node] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 16 11:49:50.862: INFO: The status of Pod pod-logs-websocket-57baea2f-edac-42ca-ae5d-68f5867ad219 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 11:49:52.869: INFO: The status of Pod pod-logs-websocket-57baea2f-edac-42ca-ae5d-68f5867ad219 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:49:52.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4717" for this suite.
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":1164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:49:54.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2130" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":60,"skipped":1193,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-a971a3f6-9c82-4f1a-b25b-d38d5c9127ff
STEP: Creating a pod to test consume secrets
Sep 16 11:49:54.964: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424" in namespace "projected-2910" to be "Succeeded or Failed"
Sep 16 11:49:54.972: INFO: Pod "pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424": Phase="Pending", Reason="", readiness=false. Elapsed: 7.887305ms
Sep 16 11:49:56.985: INFO: Pod "pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020696577s
STEP: Saw pod success
Sep 16 11:49:56.985: INFO: Pod "pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424" satisfied condition "Succeeded or Failed"
Sep 16 11:49:56.993: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 11:49:57.016: INFO: Waiting for pod pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424 to disappear
Sep 16 11:49:57.022: INFO: Pod pod-projected-secrets-18934806-6225-4709-a3ee-76ae73947424 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:49:57.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2910" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1195,"failed":0}
SS
------------------------------
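Each `{"msg":"PASSED …"}` line above is a machine-readable progress record carrying `total`, `completed`, `skipped`, and `failed` counts (field names taken directly from the log lines). A small sketch that tallies these records, e.g. to compute how many of the 346 specs remain (hypothetical helper, not part of the test framework):

```python
import json

def summarize_progress(lines):
    """Return the latest progress record from ginkgo-style lines like
    {"msg":"PASSED ...","total":346,"completed":61,"skipped":1195,"failed":0}
    with a derived 'remaining' count, or None if no record is found."""
    last = None
    for line in lines:
        line = line.lstrip("\u2022")  # some records carry a leading bullet
        if line.startswith('{"msg"'):
            last = json.loads(line)
    if last is None:
        return None
    last["remaining"] = last["total"] - last["completed"]
    return last
```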
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:50:00.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1751" for this suite.
STEP: Destroying namespace "webhook-1751-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":62,"skipped":1197,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 32 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":63,"skipped":1211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 77 lines ...
• [SLOW TEST:15.035 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":64,"skipped":1241,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 11:50:40.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f" in namespace "downward-api-1291" to be "Succeeded or Failed"
Sep 16 11:50:40.335: INFO: Pod "downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.156924ms
Sep 16 11:50:42.340: INFO: Pod "downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011818039s
STEP: Saw pod success
Sep 16 11:50:42.340: INFO: Pod "downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f" satisfied condition "Succeeded or Failed"
Sep 16 11:50:42.344: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f container client-container: <nil>
STEP: delete the pod
Sep 16 11:50:42.370: INFO: Waiting for pod downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f to disappear
Sep 16 11:50:42.373: INFO: Pod downwardapi-volume-b7d3b0b9-330f-4215-a8ca-603777d38f1f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:50:42.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1291" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":1257,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should list and delete a collection of DaemonSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 33 lines ...
Sep 16 11:50:46.624: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"5194"},"items":[{"metadata":{"name":"daemon-set-fxpq5","generateName":"daemon-set-","namespace":"daemonsets-5560","uid":"ccd93381-cf54-40ee-a7f2-ee3967d39b45","resourceVersion":"5191","creationTimestamp":"2021-09-16T11:50:42Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"7ee2b9a4-6427-4900-a91f-d6e11499300d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ee2b9a4-6427-4900-a91f-d6e11499300d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-fm8cq","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-fm8cq","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-5be7f4b0-16de-minion-group-2z4b","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-5be7f4b0-16de-minion-group-2z4b"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:46Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:46Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"}],"hostIP":"10.128.0.4","podIP":"10.64.2.74","podIPs":[{"ip":"10.64.2.74"}],"startTime":"2021-09-16T11:50:42Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-16T11:50:45Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://407e9054f2f03b801d70cd08e94327c422d3a015c2a8b93b33e4a3b8dba03950","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-hnlml","generateName":"daemon-set-","namespace":"daemonsets-5560","uid":"056b82a1-f638-41ea-ba4a-617bb50fc80e","resourceVersion":"5183","creationTimestamp":"2021-09-16T11:50:42Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"7ee2b9a4-6427-4900-a91f-d6e11499300d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ee2b9a4-6427-4900-a91f-d6e11499300d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-hfpc9","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-hfpc9","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-5be7f4b0-16de-minion-group-j2m1","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-5be7f4b0-16de-minion-group-j2m1"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:45Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:45Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"}],"hostIP":"10.128.0.3","podIP":"10.64.1.28","podIPs":[{"ip":"10.64.1.28"}],"startTime":"2021-09-16T11:50:42Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-16T11:50:44Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://3a02a58a97f5a850119b9ca687956e213393f9dbe3c86e09a5dcb8009a05862d","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-nmgw8","generateName":"daemon-set-","namespace":"daemonsets-5560","uid":"e533dfdb-8459-4c08-bed3-5c14b6762f8c","resourceVersion":"5176","creationTimestamp":"2021-09-16T11:50:42Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"7ee2b9a4-6427-4900-a91f-d6e11499300d","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7ee2b9a4-6427-4900-a91f-d6e11499300d\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-16T11:50:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podI
P":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-nwgs6","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-nwgs6","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-5be7f4b0-16de-minion-group-lhnl","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-5be7f4b0-16de-minion-group-lhnl"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status
":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:43Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:43Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-16T11:50:42Z"}],"hostIP":"10.128.0.5","podIP":"10.64.3.33","podIPs":[{"ip":"10.64.3.33"}],"startTime":"2021-09-16T11:50:42Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-16T11:50:43Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://c92b9ee3348c599ac9300030fd9336e5c4147b66a445d2375c07a6b6d9a2418a","started":true}],"qosClass":"BestEffort"}}]}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:50:46.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5560" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":66,"skipped":1263,"failed":0}

------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:50:49.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3605" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":67,"skipped":1263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 19 lines ...
• [SLOW TEST:6.155 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":68,"skipped":1294,"failed":0}
[sig-network] EndpointSlice 
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 5 lines ...
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:50:58.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8290" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":69,"skipped":1294,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 47 lines ...
• [SLOW TEST:10.754 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":70,"skipped":1295,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 2 lines ...
Sep 16 11:51:08.819: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 16 11:51:10.897: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:51:10.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9655" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":71,"skipped":1312,"failed":0}
SSSSS
------------------------------
[sig-node] Pods 
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 30 lines ...
Sep 16 11:51:15.344: INFO: observed event type MODIFIED
Sep 16 11:51:15.354: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:51:15.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3242" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":72,"skipped":1317,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 50 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":73,"skipped":1323,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Discovery
... skipping 104 lines ...
Sep 16 11:51:40.298: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Sep 16 11:51:40.298: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:51:40.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-8834" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":74,"skipped":1331,"failed":0}

------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 35 lines ...
• [SLOW TEST:68.862 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":75,"skipped":1331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 11:52:49.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6" in namespace "projected-7193" to be "Succeeded or Failed"
Sep 16 11:52:49.230: INFO: Pod "downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.226866ms
Sep 16 11:52:51.234: INFO: Pod "downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009675671s
STEP: Saw pod success
Sep 16 11:52:51.234: INFO: Pod "downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6" satisfied condition "Succeeded or Failed"
Sep 16 11:52:51.238: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6 container client-container: <nil>
STEP: delete the pod
Sep 16 11:52:51.290: INFO: Waiting for pod downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6 to disappear
Sep 16 11:52:51.300: INFO: Pod downwardapi-volume-23d1c79d-23f6-442e-aff9-694cd3b5a3f6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:52:51.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7193" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":76,"skipped":1376,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 27 lines ...
• [SLOW TEST:68.507 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":77,"skipped":1388,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1397,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 28 lines ...
• [SLOW TEST:8.137 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":79,"skipped":1397,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:16.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9551" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":80,"skipped":1410,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 21 lines ...
• [SLOW TEST:11.215 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":81,"skipped":1422,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 11:54:27.460: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 16 11:54:27.534: INFO: Waiting up to 5m0s for pod "pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85" in namespace "emptydir-6527" to be "Succeeded or Failed"
Sep 16 11:54:27.542: INFO: Pod "pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182026ms
Sep 16 11:54:29.547: INFO: Pod "pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012621652s
STEP: Saw pod success
Sep 16 11:54:29.547: INFO: Pod "pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85" satisfied condition "Succeeded or Failed"
Sep 16 11:54:29.550: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85 container test-container: <nil>
STEP: delete the pod
Sep 16 11:54:29.585: INFO: Waiting for pod pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85 to disappear
Sep 16 11:54:29.588: INFO: Pod pod-e7dcd1e1-1dcf-40f8-9e4c-8c07e114ee85 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:29.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6527" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":82,"skipped":1427,"failed":0}
SSSS
------------------------------
[sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Events
... skipping 23 lines ...
• [SLOW TEST:6.141 seconds]
[sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":346,"completed":83,"skipped":1431,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 74 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:39.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7760" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":346,"completed":84,"skipped":1432,"failed":0}

------------------------------
[sig-node] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 11:54:39.938: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-3d3cf1f0-10ae-498b-9eb2-6b22b7bf37aa
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7183" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":85,"skipped":1432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 33 lines ...
• [SLOW TEST:6.310 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":86,"skipped":1460,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":346,"completed":87,"skipped":1464,"failed":0}
SSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 16 11:54:53.625: INFO: The status of Pod busybox-readonly-fs436063e0-b300-434a-819a-81e70319cf3f is Pending, waiting for it to be Running (with Ready = true)
Sep 16 11:54:55.629: INFO: The status of Pod busybox-readonly-fs436063e0-b300-434a-819a-81e70319cf3f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:55.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9717" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":88,"skipped":1471,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 16 11:54:55.702: INFO: The status of Pod busybox-scheduling-552df825-7578-48f1-a834-63cffe580bea is Pending, waiting for it to be Running (with Ready = true)
Sep 16 11:54:57.706: INFO: The status of Pod busybox-scheduling-552df825-7578-48f1-a834-63cffe580bea is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:54:57.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2226" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":89,"skipped":1484,"failed":0}
S
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":90,"skipped":1485,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
• [SLOW TEST:6.457 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":91,"skipped":1493,"failed":0}
SSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:56:35.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-2611" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":346,"completed":92,"skipped":1498,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Sep 16 11:56:35.780: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708  c02100ac-8b53-4e69-a339-8cb960b7b9b9 7221 0 2021-09-16 11:56:35 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-16 11:56:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 16 11:56:35.780: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-708  c02100ac-8b53-4e69-a339-8cb960b7b9b9 7222 0 2021-09-16 11:56:35 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-16 11:56:35 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:56:35.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-708" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":93,"skipped":1507,"failed":0}
SS
------------------------------
[sig-apps] DisruptionController 
  should create a PodDisruptionBudget [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 14 lines ...
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:56:39.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4358" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":94,"skipped":1509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should list, patch and delete a collection of StatefulSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 31 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":95,"skipped":1534,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 11:57:00.255: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa" in namespace "projected-1104" to be "Succeeded or Failed"
Sep 16 11:57:00.261: INFO: Pod "downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa": Phase="Pending", Reason="", readiness=false. Elapsed: 5.709171ms
Sep 16 11:57:02.267: INFO: Pod "downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011227027s
STEP: Saw pod success
Sep 16 11:57:02.267: INFO: Pod "downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa" satisfied condition "Succeeded or Failed"
Sep 16 11:57:02.272: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa container client-container: <nil>
STEP: delete the pod
Sep 16 11:57:02.354: INFO: Waiting for pod downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa to disappear
Sep 16 11:57:02.374: INFO: Pod downwardapi-volume-75729b9c-263d-476a-8093-cbc9e75566fa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:02.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1104" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":96,"skipped":1553,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 16 11:57:02.824: INFO: stderr: ""
Sep 16 11:57:02.824: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncloud.google.com/v1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetrics.k8s.io/v1beta1\nnetworking.gke.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscalingpolicy.kope.io/v1alpha1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:02.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9410" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":346,"completed":97,"skipped":1556,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 16 11:57:11.146: INFO: File wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:11.156: INFO: File jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:11.156: INFO: Lookups using dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e failed for: [wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local]

Sep 16 11:57:16.166: INFO: File wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:16.190: INFO: File jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:16.190: INFO: Lookups using dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e failed for: [wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local]

Sep 16 11:57:21.167: INFO: File wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:21.173: INFO: File jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:21.173: INFO: Lookups using dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e failed for: [wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local]

Sep 16 11:57:26.167: INFO: File wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:26.177: INFO: File jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:26.177: INFO: Lookups using dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e failed for: [wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local]

Sep 16 11:57:31.164: INFO: File wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:31.170: INFO: File jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local from pod  dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 11:57:31.170: INFO: Lookups using dns-9680/dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e failed for: [wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local jessie_udp@dns-test-service-3.dns-9680.svc.cluster.local]

Sep 16 11:57:36.178: INFO: DNS probes using dns-test-7ccf3979-95bb-4169-a034-8df2298bec4e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9680.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9680.svc.cluster.local; sleep 1; done
... skipping 16 lines ...
• [SLOW TEST:37.525 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":98,"skipped":1567,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Sep 16 11:57:43.354: INFO: stderr: ""
Sep 16 11:57:43.354: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:43.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1369" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":99,"skipped":1638,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 16 11:57:43.440: INFO: The status of Pod busybox-host-aliases79829a07-434a-4b80-ab2d-0de8131ec5d2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 11:57:45.448: INFO: The status of Pod busybox-host-aliases79829a07-434a-4b80-ab2d-0de8131ec5d2 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:45.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1928" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":100,"skipped":1654,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-ed9c1fc1-ceb5-4d10-a240-908edc562cda
STEP: Creating a pod to test consume configMaps
Sep 16 11:57:45.574: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b" in namespace "projected-6369" to be "Succeeded or Failed"
Sep 16 11:57:45.589: INFO: Pod "pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.344029ms
Sep 16 11:57:47.594: INFO: Pod "pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020420945s
STEP: Saw pod success
Sep 16 11:57:47.594: INFO: Pod "pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b" satisfied condition "Succeeded or Failed"
Sep 16 11:57:47.598: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b container agnhost-container: <nil>
STEP: delete the pod
Sep 16 11:57:47.624: INFO: Waiting for pod pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b to disappear
Sep 16 11:57:47.628: INFO: Pod pod-projected-configmaps-8340950a-ff59-4991-84be-da2da7d7288b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:47.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6369" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":101,"skipped":1660,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-64092b75-aeef-4d06-90e2-9d6fbd49650a
STEP: Creating a pod to test consume secrets
Sep 16 11:57:47.717: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f" in namespace "projected-3005" to be "Succeeded or Failed"
Sep 16 11:57:47.737: INFO: Pod "pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.439332ms
Sep 16 11:57:49.741: INFO: Pod "pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.02370232s
STEP: Saw pod success
Sep 16 11:57:49.741: INFO: Pod "pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f" satisfied condition "Succeeded or Failed"
Sep 16 11:57:49.744: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 11:57:49.765: INFO: Waiting for pod pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f to disappear
Sep 16 11:57:49.771: INFO: Pod pod-projected-secrets-78bd356a-ee92-4d0c-985c-fe26b731587f no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:49.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3005" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":102,"skipped":1667,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:57:53.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1294" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":103,"skipped":1687,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 15 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":346,"completed":104,"skipped":1722,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-6552a470-fbcb-4fab-ae44-b4b5cf9105f3
STEP: Creating a pod to test consume configMaps
Sep 16 11:58:06.072: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63" in namespace "projected-4054" to be "Succeeded or Failed"
Sep 16 11:58:06.076: INFO: Pod "pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.359361ms
Sep 16 11:58:08.081: INFO: Pod "pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009918508s
STEP: Saw pod success
Sep 16 11:58:08.082: INFO: Pod "pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63" satisfied condition "Succeeded or Failed"
Sep 16 11:58:08.084: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 11:58:08.116: INFO: Waiting for pod pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63 to disappear
Sep 16 11:58:08.123: INFO: Pod pod-projected-configmaps-2ad3e904-46ae-4269-a2b2-60d30c6f6a63 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:58:08.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4054" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":1730,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 126 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":346,"completed":106,"skipped":1755,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 11:58:17.166: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65" in namespace "downward-api-6539" to be "Succeeded or Failed"
Sep 16 11:58:17.177: INFO: Pod "downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65": Phase="Pending", Reason="", readiness=false. Elapsed: 10.881925ms
Sep 16 11:58:19.182: INFO: Pod "downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015855524s
STEP: Saw pod success
Sep 16 11:58:19.182: INFO: Pod "downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65" satisfied condition "Succeeded or Failed"
Sep 16 11:58:19.185: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65 container client-container: <nil>
STEP: delete the pod
Sep 16 11:58:19.209: INFO: Waiting for pod downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65 to disappear
Sep 16 11:58:19.213: INFO: Pod downwardapi-volume-d60fb55f-36fd-4d7c-905f-2e6ca72cad65 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:58:19.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6539" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":107,"skipped":1775,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-f58c1231-eebf-4c36-815e-58b9ffbd1c33
STEP: Creating secret with name secret-projected-all-test-volume-e3559ae2-bdfe-4e10-ab9c-354b7880748a
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 16 11:58:19.291: INFO: Waiting up to 5m0s for pod "projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd" in namespace "projected-7396" to be "Succeeded or Failed"
Sep 16 11:58:19.296: INFO: Pod "projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.394694ms
Sep 16 11:58:21.301: INFO: Pod "projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010363252s
STEP: Saw pod success
Sep 16 11:58:21.301: INFO: Pod "projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd" satisfied condition "Succeeded or Failed"
Sep 16 11:58:21.304: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd container projected-all-volume-test: <nil>
STEP: delete the pod
Sep 16 11:58:21.357: INFO: Waiting for pod projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd to disappear
Sep 16 11:58:21.360: INFO: Pod projected-volume-e29712fe-be5d-4f5b-99db-db043ba31abd no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:58:21.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7396" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":108,"skipped":1782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 113 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":109,"skipped":1816,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 27 lines ...
• [SLOW TEST:22.097 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1879,"failed":0}
SSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-6867/secret-test-264e3540-e2a1-4d57-a03a-6afb0ae393fa
STEP: Creating a pod to test consume secrets
Sep 16 11:59:45.537: INFO: Waiting up to 5m0s for pod "pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89" in namespace "secrets-6867" to be "Succeeded or Failed"
Sep 16 11:59:45.545: INFO: Pod "pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89": Phase="Pending", Reason="", readiness=false. Elapsed: 8.431232ms
Sep 16 11:59:47.554: INFO: Pod "pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016935253s
STEP: Saw pod success
Sep 16 11:59:47.554: INFO: Pod "pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89" satisfied condition "Succeeded or Failed"
Sep 16 11:59:47.559: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89 container env-test: <nil>
STEP: delete the pod
Sep 16 11:59:47.601: INFO: Waiting for pod pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89 to disappear
Sep 16 11:59:47.615: INFO: Pod pod-configmaps-3463eb28-5352-4fc8-8cec-17f556a09d89 no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 11:59:47.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6867" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":1885,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 11:59:47.633: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 16 11:59:47.913: INFO: PodSpec: initContainers in spec.initContainers
I0916 12:00:24.125173    2874 boskos.go:86] Sending heartbeat to Boskos
Sep 16 12:00:32.866: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-83ebc832-0653-48f6-82c7-a756a7f1c6d1", GenerateName:"", Namespace:"init-container-8886", SelfLink:"", UID:"a50f711a-8451-4650-9226-1e1c4a5c51c4", ResourceVersion:"8282", Generation:0, CreationTimestamp:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"913119564"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d3c420), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 16, 11, 59, 49, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc002d3c450), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-qnmw6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00067e760), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qnmw6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qnmw6", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-qnmw6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004672a78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"kt2-5be7f4b0-16de-minion-group-2z4b", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000eee310), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004672af0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004672b10)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004672b18), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004672b1c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003fd8690), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.128.0.4", PodIP:"10.64.2.133", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.2.133"}}, StartTime:time.Date(2021, time.September, 16, 11, 59, 47, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000eee460)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000eee4d0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://5b8f2a5a0ef379f2dd490be560dbbd2408d3ae829cabb93524fb81697a2cd2b8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00067e820), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00067e800), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc004672b9f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:32.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8886" for this suite.

• [SLOW TEST:45.243 seconds]
[sig-node] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":112,"skipped":1961,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:00:32.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a" in namespace "projected-1081" to be "Succeeded or Failed"
Sep 16 12:00:32.945: INFO: Pod "downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.474774ms
Sep 16 12:00:34.949: INFO: Pod "downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008568084s
STEP: Saw pod success
Sep 16 12:00:34.949: INFO: Pod "downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a" satisfied condition "Succeeded or Failed"
Sep 16 12:00:34.953: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a container client-container: <nil>
STEP: delete the pod
Sep 16 12:00:34.996: INFO: Waiting for pod downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a to disappear
Sep 16 12:00:35.000: INFO: Pod downwardapi-volume-d51b0d45-0c18-41d8-a9e9-4637877dd88a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:35.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1081" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1967,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 18 lines ...
• [SLOW TEST:6.657 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":114,"skipped":1973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:00:41.670: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 16 12:00:41.728: INFO: Waiting up to 5m0s for pod "pod-0fc4e876-0e16-40e5-aeff-e8539792d442" in namespace "emptydir-7053" to be "Succeeded or Failed"
Sep 16 12:00:41.734: INFO: Pod "pod-0fc4e876-0e16-40e5-aeff-e8539792d442": Phase="Pending", Reason="", readiness=false. Elapsed: 5.84104ms
Sep 16 12:00:43.739: INFO: Pod "pod-0fc4e876-0e16-40e5-aeff-e8539792d442": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011364579s
STEP: Saw pod success
Sep 16 12:00:43.739: INFO: Pod "pod-0fc4e876-0e16-40e5-aeff-e8539792d442" satisfied condition "Succeeded or Failed"
Sep 16 12:00:43.742: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-0fc4e876-0e16-40e5-aeff-e8539792d442 container test-container: <nil>
STEP: delete the pod
Sep 16 12:00:43.766: INFO: Waiting for pod pod-0fc4e876-0e16-40e5-aeff-e8539792d442 to disappear
Sep 16 12:00:43.770: INFO: Pod pod-0fc4e876-0e16-40e5-aeff-e8539792d442 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:43.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7053" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2027,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:00:43.847: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec" in namespace "projected-8463" to be "Succeeded or Failed"
Sep 16 12:00:43.854: INFO: Pod "downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.488222ms
Sep 16 12:00:45.859: INFO: Pod "downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012172352s
STEP: Saw pod success
Sep 16 12:00:45.859: INFO: Pod "downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec" satisfied condition "Succeeded or Failed"
Sep 16 12:00:45.862: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec container client-container: <nil>
STEP: delete the pod
Sep 16 12:00:45.884: INFO: Waiting for pod downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec to disappear
Sep 16 12:00:45.888: INFO: Pod downwardapi-volume-a8044d2a-9cb2-446f-999f-53ba1efb7bec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:45.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8463" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2041,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 31 lines ...
• [SLOW TEST:7.824 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":117,"skipped":2070,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 13 lines ...
Sep 16 12:00:56.889: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:56.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9410" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":118,"skipped":2078,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Sep 16 12:00:59.312: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 16 12:00:59.312: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:00:59.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2305" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":346,"completed":119,"skipped":2127,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
• [SLOW TEST:11.141 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":120,"skipped":2146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 51 lines ...
• [SLOW TEST:10.714 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":121,"skipped":2182,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 49 lines ...
• [SLOW TEST:10.235 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":122,"skipped":2191,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:01:35.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-388" for this suite.
STEP: Destroying namespace "webhook-388-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":123,"skipped":2218,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:01:36.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-400" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":124,"skipped":2233,"failed":0}
SSSS
------------------------------
[sig-apps] CronJob 
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 16 lines ...
• [SLOW TEST:84.159 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":125,"skipped":2237,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
• [SLOW TEST:360.188 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":126,"skipped":2259,"failed":0}
SSS
------------------------------
[sig-node] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 14 lines ...
• [SLOW TEST:60.109 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2262,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:10:01.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-246" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":128,"skipped":2266,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] RuntimeClass 
   should support RuntimeClasses API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] RuntimeClass
... skipping 18 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:10:01.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-6378" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":346,"completed":129,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:10:01.196: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 16 12:10:01.246: INFO: Waiting up to 5m0s for pod "pod-31f0738a-f8e0-46b9-be55-91483a603aeb" in namespace "emptydir-4635" to be "Succeeded or Failed"
Sep 16 12:10:01.253: INFO: Pod "pod-31f0738a-f8e0-46b9-be55-91483a603aeb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.516411ms
Sep 16 12:10:03.258: INFO: Pod "pod-31f0738a-f8e0-46b9-be55-91483a603aeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01165818s
STEP: Saw pod success
Sep 16 12:10:03.258: INFO: Pod "pod-31f0738a-f8e0-46b9-be55-91483a603aeb" satisfied condition "Succeeded or Failed"
Sep 16 12:10:03.262: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-31f0738a-f8e0-46b9-be55-91483a603aeb container test-container: <nil>
STEP: delete the pod
Sep 16 12:10:03.327: INFO: Waiting for pod pod-31f0738a-f8e0-46b9-be55-91483a603aeb to disappear
Sep 16 12:10:03.336: INFO: Pod pod-31f0738a-f8e0-46b9-be55-91483a603aeb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:10:03.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4635" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2299,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.259 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":131,"skipped":2337,"failed":0}
SS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Sep 16 12:10:22.043: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.049: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.055: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.060: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.064: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.068: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:22.156: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

I0916 12:10:24.170705    2874 boskos.go:86] Sending heartbeat to Boskos
Sep 16 12:10:27.165: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.172: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.195: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
... skipping 6 lines ...
Sep 16 12:10:27.358: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.364: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.443: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.461: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.468: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:27.537: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

Sep 16 12:10:32.198: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.204: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.210: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.216: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.221: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
... skipping 5 lines ...
Sep 16 12:10:32.279: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.336: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.345: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.351: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.358: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.364: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:32.456: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

Sep 16 12:10:37.165: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.171: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.177: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.188: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
... skipping 5 lines ...
Sep 16 12:10:37.337: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.343: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.350: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.357: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.371: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.383: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:37.562: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

Sep 16 12:10:42.164: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.170: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.176: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.183: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.189: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
... skipping 5 lines ...
Sep 16 12:10:42.352: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.369: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.383: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.537: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.547: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.555: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:42.587: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

Sep 16 12:10:47.212: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.218: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.231: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.234: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.239: INFO: Unable to read wheezy_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
... skipping 5 lines ...
Sep 16 12:10:47.364: INFO: Unable to read jessie_udp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.369: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742 from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.380: INFO: Unable to read jessie_udp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.458: INFO: Unable to read jessie_tcp@dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.520: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.550: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc from pod dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38: the server could not find the requested resource (get pods dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38)
Sep 16 12:10:47.649: INFO: Lookups using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4742 wheezy_tcp@dns-test-service.dns-4742 wheezy_udp@dns-test-service.dns-4742.svc wheezy_tcp@dns-test-service.dns-4742.svc wheezy_udp@_http._tcp.dns-test-service.dns-4742.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4742.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4742 jessie_tcp@dns-test-service.dns-4742 jessie_udp@dns-test-service.dns-4742.svc jessie_tcp@dns-test-service.dns-4742.svc jessie_udp@_http._tcp.dns-test-service.dns-4742.svc jessie_tcp@_http._tcp.dns-test-service.dns-4742.svc]

Sep 16 12:10:52.566: INFO: DNS probes using dns-4742/dns-test-0a4edcad-7f72-4b80-ad1b-966f6cc0eb38 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:33.208 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":132,"skipped":2339,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:28.121 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":133,"skipped":2349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:11:21.009: INFO: Waiting up to 5m0s for pod "downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d" in namespace "projected-5497" to be "Succeeded or Failed"
Sep 16 12:11:21.012: INFO: Pod "downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.920652ms
Sep 16 12:11:23.018: INFO: Pod "downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008992702s
STEP: Saw pod success
Sep 16 12:11:23.018: INFO: Pod "downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d" satisfied condition "Succeeded or Failed"
Sep 16 12:11:23.022: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d container client-container: <nil>
STEP: delete the pod
Sep 16 12:11:23.047: INFO: Waiting for pod downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d to disappear
Sep 16 12:11:23.050: INFO: Pod downwardapi-volume-32d11da5-76de-4fec-a154-f815a4a3a91d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:11:23.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5497" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":134,"skipped":2403,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:11:23.060: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 16 12:11:23.115: INFO: Waiting up to 5m0s for pod "pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9" in namespace "emptydir-6648" to be "Succeeded or Failed"
Sep 16 12:11:23.122: INFO: Pod "pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.391886ms
Sep 16 12:11:25.127: INFO: Pod "pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012239889s
STEP: Saw pod success
Sep 16 12:11:25.127: INFO: Pod "pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9" satisfied condition "Succeeded or Failed"
Sep 16 12:11:25.131: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9 container test-container: <nil>
STEP: delete the pod
Sep 16 12:11:25.155: INFO: Waiting for pod pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9 to disappear
Sep 16 12:11:25.159: INFO: Pod pod-121df3f0-5c2d-49f5-987b-dafd4a0a05b9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:11:25.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6648" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":135,"skipped":2449,"failed":0}
SSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:243.562 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":136,"skipped":2453,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 15 lines ...
• [SLOW TEST:7.187 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":137,"skipped":2492,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 24 lines ...
• [SLOW TEST:7.285 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":138,"skipped":2503,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 16 12:15:43.355: INFO: stderr: ""
Sep 16 12:15:43.356: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.40+bea2e462a5b8c2\", GitCommit:\"bea2e462a5b8c2bfe05a4f07688d06520c21a19a\", GitTreeState:\"clean\", BuildDate:\"2021-09-16T08:53:46Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.40+bea2e462a5b8c2\", GitCommit:\"bea2e462a5b8c2bfe05a4f07688d06520c21a19a\", GitTreeState:\"clean\", BuildDate:\"2021-09-16T08:53:46Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:15:43.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8686" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":346,"completed":139,"skipped":2569,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 16 12:15:43.365: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep 16 12:15:43.421: INFO: Waiting up to 5m0s for pod "var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2" in namespace "var-expansion-4297" to be "Succeeded or Failed"
Sep 16 12:15:43.428: INFO: Pod "var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.574687ms
Sep 16 12:15:45.432: INFO: Pod "var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010771218s
STEP: Saw pod success
Sep 16 12:15:45.432: INFO: Pod "var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2" satisfied condition "Succeeded or Failed"
Sep 16 12:15:45.436: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2 container dapi-container: <nil>
STEP: delete the pod
Sep 16 12:15:45.487: INFO: Waiting for pod var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2 to disappear
Sep 16 12:15:45.491: INFO: Pod var-expansion-832e946c-37a3-4d92-b40d-e4a45454e6a2 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:15:45.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4297" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":140,"skipped":2586,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:15:45.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c" in namespace "projected-6908" to be "Succeeded or Failed"
Sep 16 12:15:45.696: INFO: Pod "downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c": Phase="Pending", Reason="", readiness=false. Elapsed: 47.572839ms
Sep 16 12:15:47.701: INFO: Pod "downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.052752771s
STEP: Saw pod success
Sep 16 12:15:47.702: INFO: Pod "downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c" satisfied condition "Succeeded or Failed"
Sep 16 12:15:47.705: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c container client-container: <nil>
STEP: delete the pod
Sep 16 12:15:47.735: INFO: Waiting for pod downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c to disappear
Sep 16 12:15:47.740: INFO: Pod downwardapi-volume-e3078609-a7af-4eb2-94cf-63b2017a527c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:15:47.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6908" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2686,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:15:51.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4386" for this suite.
STEP: Destroying namespace "webhook-4386-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":142,"skipped":2688,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 31 lines ...
• [SLOW TEST:7.886 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":143,"skipped":2688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 16 12:15:59.934: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Sep 16 12:16:00.004: INFO: Waiting up to 5m0s for pod "var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14" in namespace "var-expansion-7460" to be "Succeeded or Failed"
Sep 16 12:16:00.009: INFO: Pod "var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14": Phase="Pending", Reason="", readiness=false. Elapsed: 5.180366ms
Sep 16 12:16:02.014: INFO: Pod "var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010321011s
STEP: Saw pod success
Sep 16 12:16:02.014: INFO: Pod "var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14" satisfied condition "Succeeded or Failed"
Sep 16 12:16:02.018: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14 container dapi-container: <nil>
STEP: delete the pod
Sep 16 12:16:02.042: INFO: Waiting for pod var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14 to disappear
Sep 16 12:16:02.046: INFO: Pod var-expansion-51b8f9bf-5ae7-423e-9d0b-151f3371cd14 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:16:02.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7460" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":144,"skipped":2722,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:16:02.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8407" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":145,"skipped":2745,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":146,"skipped":2765,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 37 lines ...
• [SLOW TEST:12.302 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":147,"skipped":2794,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.223 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":148,"skipped":2800,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.180 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":149,"skipped":2848,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:243.335 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":2858,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-w677
STEP: Creating a pod to test atomic-volume-subpath
Sep 16 12:21:40.592: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-w677" in namespace "subpath-8297" to be "Succeeded or Failed"
Sep 16 12:21:40.597: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Pending", Reason="", readiness=false. Elapsed: 5.048502ms
Sep 16 12:21:42.602: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 2.009257739s
Sep 16 12:21:44.605: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 4.013014977s
Sep 16 12:21:46.610: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 6.018108023s
Sep 16 12:21:48.623: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 8.030366276s
Sep 16 12:21:50.628: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 10.035495519s
Sep 16 12:21:52.634: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 12.041355085s
Sep 16 12:21:54.638: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 14.045622132s
Sep 16 12:21:56.643: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 16.050944634s
Sep 16 12:21:58.649: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 18.056840027s
Sep 16 12:22:00.653: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Running", Reason="", readiness=true. Elapsed: 20.061033017s
Sep 16 12:22:02.659: INFO: Pod "pod-subpath-test-downwardapi-w677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.066523192s
STEP: Saw pod success
Sep 16 12:22:02.659: INFO: Pod "pod-subpath-test-downwardapi-w677" satisfied condition "Succeeded or Failed"
Sep 16 12:22:02.663: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-subpath-test-downwardapi-w677 container test-container-subpath-downwardapi-w677: <nil>
STEP: delete the pod
Sep 16 12:22:02.731: INFO: Waiting for pod pod-subpath-test-downwardapi-w677 to disappear
Sep 16 12:22:02.740: INFO: Pod pod-subpath-test-downwardapi-w677 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-w677
Sep 16 12:22:02.741: INFO: Deleting pod "pod-subpath-test-downwardapi-w677" in namespace "subpath-8297"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":151,"skipped":2880,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":152,"skipped":2888,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:22:13.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9117" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":2894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 15 lines ...
• [SLOW TEST:5.315 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":154,"skipped":2946,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
• [SLOW TEST:7.496 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":155,"skipped":2976,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:22:26.038: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842" in namespace "downward-api-8246" to be "Succeeded or Failed"
Sep 16 12:22:26.046: INFO: Pod "downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842": Phase="Pending", Reason="", readiness=false. Elapsed: 7.354164ms
Sep 16 12:22:28.170: INFO: Pod "downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.131970187s
STEP: Saw pod success
Sep 16 12:22:28.170: INFO: Pod "downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842" satisfied condition "Succeeded or Failed"
Sep 16 12:22:28.205: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842 container client-container: <nil>
STEP: delete the pod
Sep 16 12:22:28.304: INFO: Waiting for pod downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842 to disappear
Sep 16 12:22:28.331: INFO: Pod downwardapi-volume-8746cebb-d875-4030-9948-8db91cce9842 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:22:28.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8246" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":156,"skipped":2980,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
• [SLOW TEST:300.186 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":157,"skipped":2992,"failed":0}
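The suspend behavior verified above is driven by a single field: while `spec.suspend` is `true`, the controller creates no Jobs for the schedule. A minimal sketch (names are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: suspended-cronjob    # illustrative name
spec:
  schedule: "*/1 * * * *"
  suspend: true              # no Jobs are scheduled while this is true
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: c
            image: busybox
            command: ["sh", "-c", "date"]
```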
SS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 12:27:28.614: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 12:27:28.724: INFO: Waiting up to 5m0s for pod "downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07" in namespace "downward-api-5631" to be "Succeeded or Failed"
Sep 16 12:27:28.733: INFO: Pod "downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07": Phase="Pending", Reason="", readiness=false. Elapsed: 8.561124ms
Sep 16 12:27:30.737: INFO: Pod "downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01328799s
STEP: Saw pod success
Sep 16 12:27:30.737: INFO: Pod "downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07" satisfied condition "Succeeded or Failed"
Sep 16 12:27:30.740: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07 container dapi-container: <nil>
STEP: delete the pod
Sep 16 12:27:30.781: INFO: Waiting for pod downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07 to disappear
Sep 16 12:27:30.784: INFO: Pod downward-api-39fe178f-4e45-4bf3-b30b-89691786ed07 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:27:30.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5631" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":158,"skipped":2994,"failed":0}
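The behavior under test here: when a container declares no resource limits, downward API env vars backed by `resourceFieldRef` fall back to the node's allocatable values. A sketch of the relevant container spec (illustrative names):

```yaml
containers:
- name: dapi-container       # illustrative name
  image: busybox
  command: ["sh", "-c", "env | grep _LIMIT"]
  # no resources.limits set, so these resolve to node allocatable
  env:
  - name: CPU_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.cpu
  - name: MEMORY_LIMIT
    valueFrom:
      resourceFieldRef:
        resource: limits.memory
```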
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
• [SLOW TEST:7.739 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":159,"skipped":3003,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:27:38.533: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 16 12:27:38.642: INFO: Waiting up to 5m0s for pod "pod-6d943ddf-b34a-4ac3-b164-a752429473a7" in namespace "emptydir-7032" to be "Succeeded or Failed"
Sep 16 12:27:38.649: INFO: Pod "pod-6d943ddf-b34a-4ac3-b164-a752429473a7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.299286ms
Sep 16 12:27:40.653: INFO: Pod "pod-6d943ddf-b34a-4ac3-b164-a752429473a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011527152s
STEP: Saw pod success
Sep 16 12:27:40.653: INFO: Pod "pod-6d943ddf-b34a-4ac3-b164-a752429473a7" satisfied condition "Succeeded or Failed"
Sep 16 12:27:40.656: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-6d943ddf-b34a-4ac3-b164-a752429473a7 container test-container: <nil>
STEP: delete the pod
Sep 16 12:27:40.677: INFO: Waiting for pod pod-6d943ddf-b34a-4ac3-b164-a752429473a7 to disappear
Sep 16 12:27:40.679: INFO: Pod pod-6d943ddf-b34a-4ac3-b164-a752429473a7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:27:40.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7032" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":3006,"failed":0}
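For the "(root,0666,tmpfs)" case above: `medium: Memory` backs the emptyDir with tmpfs, and the test writes a file as root with mode 0666. A rough sketch of the shape of such a pod (illustrative, not the exact e2e manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo hi > /test/f && chmod 0666 /test/f && stat -c '%a' /test/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory         # tmpfs-backed
```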
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:17.113 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":161,"skipped":3022,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 38 lines ...
Sep 16 12:28:00.179: INFO: Starting http.Client for https://35.222.34.167/api/v1/namespaces/proxy-5331/services/test-service/proxy/some/path/with/PUT
Sep 16 12:28:00.184: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:00.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5331" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":162,"skipped":3031,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:04.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7883" for this suite.
STEP: Destroying namespace "webhook-7883-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":163,"skipped":3051,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-f82b6cb9-e9e0-4710-b25c-c950b96815f6
STEP: Creating a pod to test consume secrets
Sep 16 12:28:04.245: INFO: Waiting up to 5m0s for pod "pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805" in namespace "secrets-5005" to be "Succeeded or Failed"
Sep 16 12:28:04.251: INFO: Pod "pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805": Phase="Pending", Reason="", readiness=false. Elapsed: 5.654968ms
Sep 16 12:28:06.256: INFO: Pod "pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010627776s
STEP: Saw pod success
Sep 16 12:28:06.256: INFO: Pod "pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805" satisfied condition "Succeeded or Failed"
Sep 16 12:28:06.258: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 12:28:06.280: INFO: Waiting for pod pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805 to disappear
Sep 16 12:28:06.284: INFO: Pod pod-secrets-0de60a3d-93cd-41a2-88f6-5e958d5cb805 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:06.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5005" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":164,"skipped":3055,"failed":0}
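"Mappings and Item Mode" refers to a secret volume that remaps keys to paths via `items` and sets a per-item `mode`. A minimal sketch (secret name and key are illustrative):

```yaml
volumes:
- name: secret-volume
  secret:
    secretName: my-secret        # illustrative name
    items:
    - key: data-1                # illustrative key
      path: new-path-data-1      # key remapped to this file path
      mode: 0400                 # per-item mode overrides defaultMode
```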
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:06.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3201" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":165,"skipped":3056,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-80fd62e5-b1fc-4c9f-abf4-f94ec2fd85a4
STEP: Creating a pod to test consume configMaps
Sep 16 12:28:06.510: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838" in namespace "configmap-6727" to be "Succeeded or Failed"
Sep 16 12:28:06.516: INFO: Pod "pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838": Phase="Pending", Reason="", readiness=false. Elapsed: 6.3196ms
Sep 16 12:28:08.522: INFO: Pod "pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012093649s
STEP: Saw pod success
Sep 16 12:28:08.522: INFO: Pod "pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838" satisfied condition "Succeeded or Failed"
Sep 16 12:28:08.526: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 12:28:08.556: INFO: Waiting for pod pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838 to disappear
Sep 16 12:28:08.559: INFO: Pod pod-configmaps-c9d3aa55-33d8-4642-8108-da690b6a4838 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:08.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6727" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":166,"skipped":3061,"failed":0}
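The plain ConfigMap-volume consumption tested above looks roughly like this (names are illustrative): each key of the ConfigMap becomes a file under the mount path.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "cat /etc/config/*"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: my-config            # illustrative ConfigMap name
```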
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.260 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":167,"skipped":3086,"failed":0}
SS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 16 12:28:18.916: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:18.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-629" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":3088,"failed":0}
SSS
------------------------------
[sig-node] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 12:28:19.009: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-4976bf93-d291-4a37-932f-35c6ba19792e" in namespace "security-context-test-5396" to be "Succeeded or Failed"
Sep 16 12:28:19.014: INFO: Pod "busybox-privileged-false-4976bf93-d291-4a37-932f-35c6ba19792e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.592884ms
Sep 16 12:28:21.018: INFO: Pod "busybox-privileged-false-4976bf93-d291-4a37-932f-35c6ba19792e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009017671s
Sep 16 12:28:21.018: INFO: Pod "busybox-privileged-false-4976bf93-d291-4a37-932f-35c6ba19792e" satisfied condition "Succeeded or Failed"
Sep 16 12:28:21.026: INFO: Got logs for pod "busybox-privileged-false-4976bf93-d291-4a37-932f-35c6ba19792e": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:21.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5396" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":169,"skipped":3091,"failed":0}
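The "Operation not permitted" output in the log above is exactly what an unprivileged container should see when it tries a privileged network operation. A sketch of the container spec (command is illustrative):

```yaml
containers:
- name: busybox-privileged-false
  image: busybox
  # attempts a privileged netlink operation; expected to be denied
  command: ["sh", "-c", "ip link add dummy0 type dummy"]
  securityContext:
    privileged: false
```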
SSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 9 lines ...
Sep 16 12:28:21.093: INFO: Endpoints addresses: [35.222.34.167] , ports: [443]
Sep 16 12:28:21.093: INFO: EndpointSlices addresses: [35.222.34.167] , ports: [443]
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:21.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4313" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":170,"skipped":3104,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Sep 16 12:28:21.947: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 16 12:28:21.947: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:21.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5684" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":346,"completed":171,"skipped":3133,"failed":0}
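Opting out of token automount can be done on the ServiceAccount, the pod, or both (the pod-level field wins when set). A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomountsa                      # illustrative name
automountServiceAccountToken: false    # default for pods using this SA
---
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                   # illustrative name
spec:
  serviceAccountName: nomountsa
  automountServiceAccountToken: false  # pod-level setting takes precedence
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "ls /var/run/secrets/kubernetes.io 2>/dev/null || echo no-token"]
```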
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Sep 16 12:28:24.434: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
Sep 16 12:28:24.635: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:24.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9939" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":172,"skipped":3154,"failed":0}
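The configurable nameservers exercised above come from the pod's `dnsConfig`, which with `dnsPolicy: None` fully replaces the cluster DNS settings. A minimal sketch (values are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers-demo  # illustrative name
spec:
  dnsPolicy: "None"                # ignore cluster DNS entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]       # illustrative resolver
    searches: ["resolv.conf.local"]
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "cat /etc/resolv.conf"]
```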
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:28.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5189" for this suite.
STEP: Destroying namespace "webhook-5189-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":173,"skipped":3157,"failed":0}
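For reference, the pod-mutating behavior tested above is registered through a `MutatingWebhookConfiguration` pointing at an in-cluster service. A rough sketch, with illustrative names and a placeholder CA bundle:

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-pods.example.com      # illustrative name
webhooks:
- name: mutate-pods.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  clientConfig:
    service:
      name: webhook-service          # illustrative service
      namespace: webhook-ns          # illustrative namespace
      path: /mutating-pods
    caBundle: "<BASE64_CA_BUNDLE>"   # placeholder
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
```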

------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 6 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:28:33.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3665" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":174,"skipped":3157,"failed":0}
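The `kernel.shm_rmid_forced` sysctl set by this test (one of the "safe" sysctls allowed by default) is requested via the pod-level security context. A minimal sketch (pod name is illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo                # illustrative name
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced # safe sysctl, allowed without kubelet flags
      value: "1"
  containers:
  - name: c
    image: busybox
    command: ["sh", "-c", "sysctl kernel.shm_rmid_forced"]
```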
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:150.561 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":175,"skipped":3191,"failed":0}
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-22cffb2f-985c-4921-8aa5-c860552d03d7
STEP: Creating a pod to test consume secrets
Sep 16 12:31:03.811: INFO: Waiting up to 5m0s for pod "pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef" in namespace "secrets-4624" to be "Succeeded or Failed"
Sep 16 12:31:03.817: INFO: Pod "pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.825959ms
Sep 16 12:31:05.826: INFO: Pod "pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014175001s
STEP: Saw pod success
Sep 16 12:31:05.826: INFO: Pod "pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef" satisfied condition "Succeeded or Failed"
Sep 16 12:31:05.831: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 12:31:05.880: INFO: Waiting for pod pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef to disappear
Sep 16 12:31:05.885: INFO: Pod pod-secrets-cfec03a6-6a63-4c2b-a6d7-36c504b65bef no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:05.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4624" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":176,"skipped":3193,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":346,"completed":177,"skipped":3200,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 15 lines ...
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9232 to expose endpoints map[pod1:[80]]
Sep 16 12:31:14.358: INFO: successfully validated that service endpoint-test2 in namespace services-9232 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Sep 16 12:31:14.358: INFO: Creating new exec pod
Sep 16 12:31:17.375: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:18.753: INFO: rc: 1
Sep 16 12:31:18.753: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:31:19.753: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:20.992: INFO: rc: 1
Sep 16 12:31:20.992: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:31:21.754: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:22.986: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Sep 16 12:31:22.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 12:31:22.986: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.250.85 80'
... skipping 14 lines ...
STEP: Deleting pod pod1 in namespace services-9232
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-9232 to expose endpoints map[pod2:[80]]
Sep 16 12:31:26.826: INFO: successfully validated that service endpoint-test2 in namespace services-9232 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Sep 16 12:31:27.826: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:30.078: INFO: rc: 1
Sep 16 12:31:30.078: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ nc -v -t -w 2 endpoint-test2 80
+ echo hostName
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:31:31.078: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:33.236: INFO: rc: 1
Sep 16 12:31:33.237: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:31:34.078: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 16 12:31:34.369: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Sep 16 12:31:34.369: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 12:31:34.369: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-9232 exec execpodmjkd5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.250.85 80'
... skipping 12 lines ...
• [SLOW TEST:22.494 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":346,"completed":178,"skipped":3221,"failed":0}
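The retried reachability probe above (kubectl exec into a helper pod, then a short-timeout `nc` connect to the service name, repeated until it succeeds) can be sketched as a small retry wrapper. This is an illustrative sketch only, not the e2e framework's code; the `retry` helper and the pod/service names passed to it are placeholders.

```shell
#!/bin/sh
# retry CMD...: run CMD up to 5 times, one second apart, returning success
# on the first attempt that exits 0 -- the same pattern the log shows with
# its "Retrying..." lines between failed nc probes.
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 5 ] && return 1
    echo "Retrying..."
    sleep 1
  done
}

# Placeholder usage (names are illustrative, taken from this log's test):
#   retry kubectl --namespace=services-9232 exec execpodmjkd5 -- \
#     /bin/sh -x -c 'echo hostName | nc -v -t -w 2 endpoint-test2 80'
```

The `-w 2` on `nc` bounds each connect attempt at two seconds, so a dead endpoint fails fast ("Connection refused" while no pod backs the service) and the wrapper's sleep paces the retries.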
SSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 12:31:34.724: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 12:31:36.790: INFO: Deleting pod "var-expansion-01be7973-ebb8-4a62-aba2-5a7f8fe75ee3" in namespace "var-expansion-9475"
Sep 16 12:31:36.796: INFO: Wait up to 5m0s for pod "var-expansion-01be7973-ebb8-4a62-aba2-5a7f8fe75ee3" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:38.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9475" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":179,"skipped":3231,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 27 lines ...
• [SLOW TEST:5.205 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":180,"skipped":3233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Sep 16 12:31:46.880: INFO: stderr: ""
Sep 16 12:31:46.880: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:46.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4500" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":346,"completed":181,"skipped":3290,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:48.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9168" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":182,"skipped":3316,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:48.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3263" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":346,"completed":183,"skipped":3330,"failed":0}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 12:31:48.355: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 16 12:31:49.040: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 16 12:31:52.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:52.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1212" for this suite.
STEP: Destroying namespace "webhook-1212-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":184,"skipped":3330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 16 12:31:52.546: INFO: Asynchronously running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-5779 proxy --unix-socket=/tmp/kubectl-proxy-unix334142059/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:31:52.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5779" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":346,"completed":185,"skipped":3421,"failed":0}

------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 16 12:31:56.774: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.780: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.875: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.888: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.893: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:31:56.990: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:01.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.003: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.009: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.015: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.168: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.174: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.181: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:02.269: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:06.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.004: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.010: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.016: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.068: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.169: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.176: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.183: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:07.205: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:11.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.002: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.007: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.011: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.074: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.078: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.083: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.087: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:12.185: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:16.998: INFO: Unable to read wheezy_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.004: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.026: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.079: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.084: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.089: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.098: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:17.288: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:22.004: INFO: Unable to read wheezy_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.011: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.017: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.025: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.068: INFO: Unable to read jessie_udp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.074: INFO: Unable to read jessie_tcp@dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.080: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.085: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local from pod dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195: the server could not find the requested resource (get pods dns-test-d2529e49-990d-42ac-a9a3-20b933775195)
Sep 16 12:32:22.168: INFO: Lookups using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 failed for: [wheezy_udp@dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@dns-test-service.dns-1741.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_udp@dns-test-service.dns-1741.svc.cluster.local jessie_tcp@dns-test-service.dns-1741.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1741.svc.cluster.local]

Sep 16 12:32:27.195: INFO: DNS probes using dns-1741/dns-test-d2529e49-990d-42ac-a9a3-20b933775195 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:34.790 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":346,"completed":186,"skipped":3421,"failed":0}
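The DNS probe loop above repeatedly resolves the service's A record and the SRV record for its named `http` port (e.g. `_http._tcp.dns-test-service.dns-1741.svc.cluster.local`) until lookups stop failing. The cluster-internal names it queries follow a fixed pattern, sketched here as two helpers; this is an illustration of the naming scheme, not the test's own code.

```shell
#!/bin/sh
# svc_fqdn NAME NAMESPACE: cluster-internal DNS name of a Service.
svc_fqdn() {
  printf '%s.%s.svc.cluster.local\n' "$1" "$2"
}

# srv_name PORT PROTO NAME NAMESPACE: SRV record name for a Service's
# named port, as queried by the wheezy/jessie probe pods in the log.
srv_name() {
  printf '_%s._%s.%s\n' "$1" "$2" "$(svc_fqdn "$3" "$4")"
}

# Placeholder usage (probe-pod name is illustrative):
#   kubectl --namespace=dns-1741 exec dns-probe -- \
#     nslookup "$(srv_name http tcp dns-test-service dns-1741)"
```

Early lookups fail with "server could not find the requested resource" while the records propagate; once both the A and SRV names resolve over UDP and TCP, the test logs "DNS probes … succeeded".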
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 12:32:27.501: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:32:29.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5946" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":346,"completed":187,"skipped":3438,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 12:32:29.722: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-391a974d-5675-4a55-abb5-15f3b467b6ef
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:32:30.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5974" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":188,"skipped":3448,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:243.404 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":189,"skipped":3470,"failed":0}
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:36:34.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9" in namespace "downward-api-294" to be "Succeeded or Failed"
Sep 16 12:36:34.218: INFO: Pod "downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.890502ms
Sep 16 12:36:36.223: INFO: Pod "downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009793283s
STEP: Saw pod success
Sep 16 12:36:36.223: INFO: Pod "downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9" satisfied condition "Succeeded or Failed"
Sep 16 12:36:36.225: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9 container client-container: <nil>
STEP: delete the pod
Sep 16 12:36:36.268: INFO: Waiting for pod downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9 to disappear
Sep 16 12:36:36.272: INFO: Pod downwardapi-volume-a6965f88-cb14-4992-9fcc-b564563609a9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:36:36.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-294" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":190,"skipped":3470,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:36:36.343: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb" in namespace "projected-7568" to be "Succeeded or Failed"
Sep 16 12:36:36.346: INFO: Pod "downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.127981ms
Sep 16 12:36:38.351: INFO: Pod "downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008704424s
STEP: Saw pod success
Sep 16 12:36:38.352: INFO: Pod "downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb" satisfied condition "Succeeded or Failed"
Sep 16 12:36:38.356: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb container client-container: <nil>
STEP: delete the pod
Sep 16 12:36:38.382: INFO: Waiting for pod downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb to disappear
Sep 16 12:36:38.386: INFO: Pod downwardapi-volume-fc942e68-d714-4cb8-9ad1-36b97991c2fb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:36:38.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7568" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3470,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 16 12:36:41.500: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:36:41.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2073" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":192,"skipped":3480,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 32 lines ...
• [SLOW TEST:6.084 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":193,"skipped":3496,"failed":0}
SSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] LimitRange
... skipping 38 lines ...
• [SLOW TEST:7.290 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":194,"skipped":3501,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 18 lines ...
• [SLOW TEST:17.457 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":195,"skipped":3503,"failed":0}
SSSS
------------------------------
[sig-node] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 53 lines ...
• [SLOW TEST:6.216 seconds]
[sig-node] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":196,"skipped":3507,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Sep 16 12:37:20.695: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:37:20.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8718" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":197,"skipped":3511,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:52.326 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":198,"skipped":3531,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:38:13.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2265" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":199,"skipped":3542,"failed":0}
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 16 12:38:13.103: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 16 12:38:13.238: INFO: Waiting up to 5m0s for pod "test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9" in namespace "svcaccounts-5607" to be "Succeeded or Failed"
Sep 16 12:38:13.257: INFO: Pod "test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9": Phase="Pending", Reason="", readiness=false. Elapsed: 18.879617ms
Sep 16 12:38:15.289: INFO: Pod "test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.050633753s
STEP: Saw pod success
Sep 16 12:38:15.289: INFO: Pod "test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9" satisfied condition "Succeeded or Failed"
Sep 16 12:38:15.292: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 12:38:15.330: INFO: Waiting for pod test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9 to disappear
Sep 16 12:38:15.334: INFO: Pod test-pod-1b6b1c32-a103-49da-b2e8-2a6763a3d8f9 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:38:15.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5607" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":200,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:17.021 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":201,"skipped":3574,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:38:36.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-50" for this suite.
STEP: Destroying namespace "webhook-50-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":202,"skipped":3635,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:38:36.180: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 16 12:38:36.243: INFO: Waiting up to 5m0s for pod "pod-db698e8a-2a29-4259-96cc-c4790bf906c2" in namespace "emptydir-570" to be "Succeeded or Failed"
Sep 16 12:38:36.249: INFO: Pod "pod-db698e8a-2a29-4259-96cc-c4790bf906c2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.396545ms
Sep 16 12:38:38.254: INFO: Pod "pod-db698e8a-2a29-4259-96cc-c4790bf906c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011040365s
STEP: Saw pod success
Sep 16 12:38:38.254: INFO: Pod "pod-db698e8a-2a29-4259-96cc-c4790bf906c2" satisfied condition "Succeeded or Failed"
Sep 16 12:38:38.257: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-db698e8a-2a29-4259-96cc-c4790bf906c2 container test-container: <nil>
STEP: delete the pod
Sep 16 12:38:38.322: INFO: Waiting for pod pod-db698e8a-2a29-4259-96cc-c4790bf906c2 to disappear
Sep 16 12:38:38.341: INFO: Pod pod-db698e8a-2a29-4259-96cc-c4790bf906c2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:38:38.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-570" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":203,"skipped":3643,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:38:42.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6655" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":346,"completed":204,"skipped":3650,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 160 lines ...
Sep 16 12:38:43.817: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-2778 create -f -'
Sep 16 12:38:44.166: INFO: stderr: ""
Sep 16 12:38:44.166: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Sep 16 12:38:44.167: INFO: Waiting for all frontend pods to be Running.
Sep 16 12:38:49.219: INFO: Waiting for frontend to serve content.
Sep 16 12:38:50.262: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Sep 16 12:38:55.361: INFO: Trying to add a new entry to the guestbook.
Sep 16 12:38:55.379: INFO: Verifying that added entry can be retrieved.
Sep 16 12:38:55.392: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Sep 16 12:39:00.409: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-2778 delete --grace-period=0 --force -f -'
Sep 16 12:39:00.545: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 16 12:39:00.545: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Sep 16 12:39:00.545: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-2778 delete --grace-period=0 --force -f -'
... skipping 25 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":346,"completed":205,"skipped":3660,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:39:01.058: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 16 12:39:01.141: INFO: Waiting up to 5m0s for pod "pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1" in namespace "emptydir-7226" to be "Succeeded or Failed"
Sep 16 12:39:01.146: INFO: Pod "pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1": Phase="Pending", Reason="", readiness=false. Elapsed: 5.245754ms
Sep 16 12:39:03.153: INFO: Pod "pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011970474s
STEP: Saw pod success
Sep 16 12:39:03.153: INFO: Pod "pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1" satisfied condition "Succeeded or Failed"
Sep 16 12:39:03.157: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1 container test-container: <nil>
STEP: delete the pod
Sep 16 12:39:03.206: INFO: Waiting for pod pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1 to disappear
Sep 16 12:39:03.214: INFO: Pod pod-00ceab0d-cbc4-4945-9721-2ffcc3bb20b1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:39:03.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7226" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":206,"skipped":3690,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:39:06.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2318" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":207,"skipped":3713,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 51 lines ...
• [SLOW TEST:40.806 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":208,"skipped":3717,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PreStop
... skipping 32 lines ...
• [SLOW TEST:9.202 seconds]
[sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":346,"completed":209,"skipped":3741,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:39:56.399: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 16 12:39:56.456: INFO: Waiting up to 5m0s for pod "pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784" in namespace "emptydir-4080" to be "Succeeded or Failed"
Sep 16 12:39:56.466: INFO: Pod "pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784": Phase="Pending", Reason="", readiness=false. Elapsed: 9.801774ms
Sep 16 12:39:58.472: INFO: Pod "pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015961758s
STEP: Saw pod success
Sep 16 12:39:58.472: INFO: Pod "pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784" satisfied condition "Succeeded or Failed"
Sep 16 12:39:58.483: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784 container test-container: <nil>
STEP: delete the pod
Sep 16 12:39:58.511: INFO: Waiting for pod pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784 to disappear
Sep 16 12:39:58.518: INFO: Pod pod-20ebd9b0-7c2e-4879-919d-47f95fdc1784 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:39:58.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4080" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":210,"skipped":3749,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:39:58.600: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89" in namespace "downward-api-343" to be "Succeeded or Failed"
Sep 16 12:39:58.607: INFO: Pod "downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89": Phase="Pending", Reason="", readiness=false. Elapsed: 7.541238ms
Sep 16 12:40:00.654: INFO: Pod "downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.054788175s
STEP: Saw pod success
Sep 16 12:40:00.655: INFO: Pod "downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89" satisfied condition "Succeeded or Failed"
Sep 16 12:40:00.658: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89 container client-container: <nil>
STEP: delete the pod
Sep 16 12:40:00.678: INFO: Waiting for pod downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89 to disappear
Sep 16 12:40:00.683: INFO: Pod downwardapi-volume-4cdefcbf-32ba-4cdd-8dd4-f7831c89ca89 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:00.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-343" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":211,"skipped":3752,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-ebc7e5f0-9b8a-4620-aa8a-de34a0f3e027
STEP: Creating a pod to test consume configMaps
Sep 16 12:40:00.757: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75" in namespace "configmap-7219" to be "Succeeded or Failed"
Sep 16 12:40:00.765: INFO: Pod "pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75": Phase="Pending", Reason="", readiness=false. Elapsed: 7.724862ms
Sep 16 12:40:02.772: INFO: Pod "pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014932083s
STEP: Saw pod success
Sep 16 12:40:02.772: INFO: Pod "pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75" satisfied condition "Succeeded or Failed"
Sep 16 12:40:02.778: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 16 12:40:02.817: INFO: Waiting for pod pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75 to disappear
Sep 16 12:40:02.830: INFO: Pod pod-configmaps-2e55bb9b-7bf8-4cb8-8810-057481086f75 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:02.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7219" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":212,"skipped":3753,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-c441a6aa-0962-4710-84eb-6e0117d6f9d2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:07.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-755" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":213,"skipped":3759,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:40:07.318: INFO: Waiting up to 5m0s for pod "downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc" in namespace "projected-1831" to be "Succeeded or Failed"
Sep 16 12:40:07.336: INFO: Pod "downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.713755ms
Sep 16 12:40:09.340: INFO: Pod "downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc": Phase="Running", Reason="", readiness=true. Elapsed: 2.022009508s
Sep 16 12:40:11.345: INFO: Pod "downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027757656s
STEP: Saw pod success
Sep 16 12:40:11.345: INFO: Pod "downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc" satisfied condition "Succeeded or Failed"
Sep 16 12:40:11.349: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc container client-container: <nil>
STEP: delete the pod
Sep 16 12:40:11.390: INFO: Waiting for pod downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc to disappear
Sep 16 12:40:11.394: INFO: Pod downwardapi-volume-87815d31-3135-4f21-9723-00edd54461fc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:11.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1831" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":214,"skipped":3809,"failed":0}

------------------------------
[sig-apps] Deployment 
  should validate Deployment Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 62 lines ...
Sep 16 12:40:13.594: INFO: Pod "test-deployment-nrd7s-d9bb78c49-24lg9" is available:
&Pod{ObjectMeta:{test-deployment-nrd7s-d9bb78c49-24lg9 test-deployment-nrd7s-d9bb78c49- deployment-327  adcadafb-fb15-4557-b5da-b69a21fdb803 16293 0 2021-09-16 12:40:11 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:d9bb78c49] map[] [{apps/v1 ReplicaSet test-deployment-nrd7s-d9bb78c49 77b2d6e2-f0ed-434a-86cf-41ab07affde6 0xc0050ed350 0xc0050ed351}] []  [{kube-controller-manager Update v1 2021-09-16 12:40:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77b2d6e2-f0ed-434a-86cf-41ab07affde6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 12:40:13 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.81\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-r877n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-r877n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-lhnl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:40:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:40:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:40:13 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:40:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:10.64.3.81,StartTime:2021-09-16 12:40:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-16 12:40:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0cfabd6bddfde2bbae34d4f7eaf61d7fb05c57fe1fb53f1fdc478e004effc0e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.81,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:13.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-327" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":215,"skipped":3809,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:40:13.616: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 16 12:40:13.747: INFO: Waiting up to 5m0s for pod "pod-673d5f22-eae3-44d8-836a-506ca6f29b13" in namespace "emptydir-4583" to be "Succeeded or Failed"
Sep 16 12:40:13.753: INFO: Pod "pod-673d5f22-eae3-44d8-836a-506ca6f29b13": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748666ms
Sep 16 12:40:15.758: INFO: Pod "pod-673d5f22-eae3-44d8-836a-506ca6f29b13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011235404s
STEP: Saw pod success
Sep 16 12:40:15.758: INFO: Pod "pod-673d5f22-eae3-44d8-836a-506ca6f29b13" satisfied condition "Succeeded or Failed"
Sep 16 12:40:15.761: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-673d5f22-eae3-44d8-836a-506ca6f29b13 container test-container: <nil>
STEP: delete the pod
Sep 16 12:40:15.820: INFO: Waiting for pod pod-673d5f22-eae3-44d8-836a-506ca6f29b13 to disappear
Sep 16 12:40:15.825: INFO: Pod pod-673d5f22-eae3-44d8-836a-506ca6f29b13 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:15.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4583" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":216,"skipped":3811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:20.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5173" for this suite.
•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":3847,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 24 lines ...
Sep 16 12:40:22.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-4637" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:22.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7055" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":218,"skipped":3858,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 40 lines ...
Sep 16 12:40:27.796: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=crd-publish-openapi-2357 explain e2e-test-crd-publish-openapi-5939-crds.spec'
Sep 16 12:40:28.009: INFO: stderr: ""
Sep 16 12:40:28.009: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5939-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 16 12:40:28.009: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=crd-publish-openapi-2357 explain e2e-test-crd-publish-openapi-5939-crds.spec.bars'
Sep 16 12:40:28.223: INFO: stderr: ""
Sep 16 12:40:28.223: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5939-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 16 12:40:28.223: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=crd-publish-openapi-2357 explain e2e-test-crd-publish-openapi-5939-crds.spec.bars2'
Sep 16 12:40:28.412: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:32.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2357" for this suite.

• [SLOW TEST:9.785 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":219,"skipped":3910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Sep 16 12:40:32.497: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:32.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3481" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":220,"skipped":3940,"failed":0}
SS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 12:40:32.512: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:40.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1752" for this suite.

• [SLOW TEST:8.084 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":221,"skipped":3942,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:8.611 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":222,"skipped":3984,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:50.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2692" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":223,"skipped":3988,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 12:40:50.182: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 12:40:50.254: INFO: Waiting up to 5m0s for pod "downward-api-4dc26114-73de-4b44-af53-ce821381e0c9" in namespace "downward-api-9071" to be "Succeeded or Failed"
Sep 16 12:40:50.263: INFO: Pod "downward-api-4dc26114-73de-4b44-af53-ce821381e0c9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.672632ms
Sep 16 12:40:52.268: INFO: Pod "downward-api-4dc26114-73de-4b44-af53-ce821381e0c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013928987s
STEP: Saw pod success
Sep 16 12:40:52.268: INFO: Pod "downward-api-4dc26114-73de-4b44-af53-ce821381e0c9" satisfied condition "Succeeded or Failed"
Sep 16 12:40:52.271: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downward-api-4dc26114-73de-4b44-af53-ce821381e0c9 container dapi-container: <nil>
STEP: delete the pod
Sep 16 12:40:52.296: INFO: Waiting for pod downward-api-4dc26114-73de-4b44-af53-ce821381e0c9 to disappear
Sep 16 12:40:52.299: INFO: Pod downward-api-4dc26114-73de-4b44-af53-ce821381e0c9 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:40:52.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9071" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":3989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-ktx8
STEP: Creating a pod to test atomic-volume-subpath
Sep 16 12:40:52.390: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-ktx8" in namespace "subpath-2958" to be "Succeeded or Failed"
Sep 16 12:40:52.403: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.722451ms
Sep 16 12:40:54.407: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 2.017448553s
Sep 16 12:40:56.413: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 4.022637982s
Sep 16 12:40:58.419: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 6.028760168s
Sep 16 12:41:00.423: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 8.032980054s
Sep 16 12:41:02.427: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 10.037228083s
... skipping 2 lines ...
Sep 16 12:41:08.442: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 16.052412007s
Sep 16 12:41:10.448: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 18.058550283s
Sep 16 12:41:12.453: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 20.063525049s
Sep 16 12:41:14.458: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Running", Reason="", readiness=true. Elapsed: 22.068232385s
Sep 16 12:41:16.463: INFO: Pod "pod-subpath-test-projected-ktx8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073204078s
STEP: Saw pod success
Sep 16 12:41:16.463: INFO: Pod "pod-subpath-test-projected-ktx8" satisfied condition "Succeeded or Failed"
Sep 16 12:41:16.466: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-subpath-test-projected-ktx8 container test-container-subpath-projected-ktx8: <nil>
STEP: delete the pod
Sep 16 12:41:16.498: INFO: Waiting for pod pod-subpath-test-projected-ktx8 to disappear
Sep 16 12:41:16.502: INFO: Pod pod-subpath-test-projected-ktx8 no longer exists
STEP: Deleting pod pod-subpath-test-projected-ktx8
Sep 16 12:41:16.502: INFO: Deleting pod "pod-subpath-test-projected-ktx8" in namespace "subpath-2958"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":225,"skipped":4014,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 12:41:16.571: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-72f5dabd-33cf-4d92-9b4f-4576630e2b0b" in namespace "security-context-test-4094" to be "Succeeded or Failed"
Sep 16 12:41:16.578: INFO: Pod "busybox-readonly-false-72f5dabd-33cf-4d92-9b4f-4576630e2b0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.912372ms
Sep 16 12:41:18.583: INFO: Pod "busybox-readonly-false-72f5dabd-33cf-4d92-9b4f-4576630e2b0b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011720272s
Sep 16 12:41:18.583: INFO: Pod "busybox-readonly-false-72f5dabd-33cf-4d92-9b4f-4576630e2b0b" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:41:18.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4094" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":226,"skipped":4034,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 344 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":346,"completed":227,"skipped":4053,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 16 12:41:24.796: INFO: stderr: ""
Sep 16 12:41:24.796: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://35.222.34.167\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:41:24.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9524" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":346,"completed":228,"skipped":4060,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 41 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":229,"skipped":4079,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Sep 16 12:42:50.293: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:42:50.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7728" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":230,"skipped":4117,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-nodeport in namespace services-1265
I0916 12:42:50.428021   96838 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-1265, replica count: 3
I0916 12:42:53.479680   96838 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 16 12:42:53.498: INFO: Creating new exec pod
Sep 16 12:42:56.607: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 16 12:42:57.950: INFO: rc: 1
Sep 16 12:42:57.950: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:42:58.950: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 16 12:43:00.190: INFO: rc: 1
Sep 16 12:43:00.190: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:43:00.950: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 16 12:43:01.182: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Sep 16 12:43:01.182: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 12:43:01.182: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1265 exec execpod-affinitylrntv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.8.201 80'
... skipping 38 lines ...
• [SLOW TEST:14.795 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":231,"skipped":4128,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:05.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6984" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":232,"skipped":4160,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 29 lines ...
• [SLOW TEST:6.595 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":233,"skipped":4169,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:15.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6585" for this suite.
STEP: Destroying namespace "webhook-6585-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":234,"skipped":4185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.871 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":235,"skipped":4231,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-3155fba5-ed3b-4538-99c3-3578253311c4
STEP: Creating a pod to test consume configMaps
Sep 16 12:43:25.201: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0" in namespace "projected-9734" to be "Succeeded or Failed"
Sep 16 12:43:25.221: INFO: Pod "pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0": Phase="Pending", Reason="", readiness=false. Elapsed: 19.727667ms
Sep 16 12:43:27.225: INFO: Pod "pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023812898s
STEP: Saw pod success
Sep 16 12:43:27.225: INFO: Pod "pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0" satisfied condition "Succeeded or Failed"
Sep 16 12:43:27.227: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 12:43:27.265: INFO: Waiting for pod pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0 to disappear
Sep 16 12:43:27.269: INFO: Pod pod-projected-configmaps-ac9993b9-f9f9-4987-a1d7-79ad83fe69f0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:27.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9734" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":236,"skipped":4233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 16 12:43:27.323: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:31.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-612" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":237,"skipped":4267,"failed":0}
SSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-4661eb96-a92e-4b84-a377-95fbe5d8461f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:35.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4673" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":238,"skipped":4270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:22.198 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4311,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 2 lines ...
Sep 16 12:43:57.467: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 16 12:43:57.535: INFO: Waiting up to 5m0s for pod "security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58" in namespace "security-context-8932" to be "Succeeded or Failed"
Sep 16 12:43:57.545: INFO: Pod "security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.671357ms
Sep 16 12:43:59.550: INFO: Pod "security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014449721s
STEP: Saw pod success
Sep 16 12:43:59.550: INFO: Pod "security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58" satisfied condition "Succeeded or Failed"
Sep 16 12:43:59.552: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58 container test-container: <nil>
STEP: delete the pod
Sep 16 12:43:59.581: INFO: Waiting for pod security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58 to disappear
Sep 16 12:43:59.584: INFO: Pod security-context-a75c5959-6188-4f11-8c50-c9bf7850ee58 no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:59.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8932" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":240,"skipped":4327,"failed":0}
SS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:43:59.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8173" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":241,"skipped":4329,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Sep 16 12:44:02.236: INFO: stderr: ""
Sep 16 12:44:02.236: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:44:02.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6901" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":346,"completed":242,"skipped":4330,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 15 lines ...
• [SLOW TEST:18.186 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":243,"skipped":4359,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-13f60666-9473-4603-a3cd-0759910bf49c
STEP: Creating a pod to test consume secrets
Sep 16 12:44:20.507: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0" in namespace "projected-4248" to be "Succeeded or Failed"
Sep 16 12:44:20.513: INFO: Pod "pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.007889ms
Sep 16 12:44:22.519: INFO: Pod "pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011349123s
STEP: Saw pod success
Sep 16 12:44:22.519: INFO: Pod "pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0" satisfied condition "Succeeded or Failed"
Sep 16 12:44:22.522: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 12:44:22.547: INFO: Waiting for pod pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0 to disappear
Sep 16 12:44:22.552: INFO: Pod pod-projected-secrets-96cd15c0-9303-409d-ad9a-6ba1c1d87aa0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:44:22.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4248" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":244,"skipped":4387,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 64 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
I0916 12:45:24.344736    2874 boskos.go:86] Sending heartbeat to Boskos
I0916 12:50:24.366553    2874 boskos.go:86] Sending heartbeat to Boskos
Sep 16 12:54:24.173: INFO: Timed out waiting for the following pods to schedule
Sep 16 12:54:24.173: INFO: kube-system/konnectivity-agent-85q4n
Sep 16 12:54:24.173: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.6()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436 +0x85
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 16 12:54:24.173: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":346,"completed":244,"skipped":4388,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 12:54:24.771: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 16 12:54:24.837: INFO: Waiting up to 5m0s for pod "pod-0e12134c-500d-44ae-b79a-e11b579616e7" in namespace "emptydir-1091" to be "Succeeded or Failed"
Sep 16 12:54:24.847: INFO: Pod "pod-0e12134c-500d-44ae-b79a-e11b579616e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.58304ms
Sep 16 12:54:26.854: INFO: Pod "pod-0e12134c-500d-44ae-b79a-e11b579616e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016482789s
STEP: Saw pod success
Sep 16 12:54:26.854: INFO: Pod "pod-0e12134c-500d-44ae-b79a-e11b579616e7" satisfied condition "Succeeded or Failed"
Sep 16 12:54:26.860: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-0e12134c-500d-44ae-b79a-e11b579616e7 container test-container: <nil>
STEP: delete the pod
Sep 16 12:54:26.884: INFO: Waiting for pod pod-0e12134c-500d-44ae-b79a-e11b579616e7 to disappear
Sep 16 12:54:26.888: INFO: Pod pod-0e12134c-500d-44ae-b79a-e11b579616e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:26.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1091" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":245,"skipped":4414,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSS
------------------------------
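The pod runs recorded above all follow one pattern: the framework polls the pod's phase ("Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'") until it reaches a terminal phase or the timeout expires. A minimal sketch of that loop — a hypothetical helper, not the framework's actual code; the timeout, interval, and `get_phase` callback are illustrative stand-ins for the real API lookup:

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase or until timeout expires.

    Mirrors the log's "Waiting up to 5m0s ..." loop; get_phase is a
    stand-in for fetching the pod's status from the API server.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated pod: Pending on the first poll, Succeeded on the second,
# matching the two INFO lines each run emits above.
phases = iter(["Pending", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases), timeout=10.0, interval=0.01)
```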
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 18 lines ...
• [SLOW TEST:6.222 seconds]
[sig-storage] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":246,"skipped":4423,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 14 lines ...
STEP: Creating configMap with name cm-test-opt-create-6cf0eb98-2eab-4472-a367-8666dbbf2df6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:37.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-205" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":247,"skipped":4425,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-73b794a9-21e2-4c74-9f29-93ed2eef21e1
STEP: Creating a pod to test consume secrets
Sep 16 12:54:37.593: INFO: Waiting up to 5m0s for pod "pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6" in namespace "secrets-6756" to be "Succeeded or Failed"
Sep 16 12:54:37.602: INFO: Pod "pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.690472ms
Sep 16 12:54:39.606: INFO: Pod "pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012893973s
STEP: Saw pod success
Sep 16 12:54:39.606: INFO: Pod "pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6" satisfied condition "Succeeded or Failed"
Sep 16 12:54:39.610: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 12:54:39.634: INFO: Waiting for pod pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6 to disappear
Sep 16 12:54:39.638: INFO: Pod pod-secrets-3ecaf591-664e-4203-a60c-89b79fd88ad6 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:39.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6756" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4430,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:39.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7889" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":249,"skipped":4436,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Sep 16 12:54:42.426: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 16 12:54:42.426: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-7486 describe pod agnhost-primary-m26c6'
Sep 16 12:54:42.566: INFO: stderr: ""
Sep 16 12:54:42.566: INFO: stdout: "Name:         agnhost-primary-m26c6\nNamespace:    kubectl-7486\nPriority:     0\nNode:         kt2-5be7f4b0-16de-minion-group-lhnl/10.128.0.5\nStart Time:   Thu, 16 Sep 2021 12:54:40 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.3.87\nIPs:\n  IP:           10.64.3.87\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://2999b1c18b28d1f410778ff18c25d325d9454e5d940fb240fb071f3a67be4cbb\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 16 Sep 2021 12:54:41 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nwsrk (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-nwsrk:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-7486/agnhost-primary-m26c6 to 
kt2-5be7f4b0-16de-minion-group-lhnl\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Sep 16 12:54:42.566: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-7486 describe rc agnhost-primary'
Sep 16 12:54:42.696: INFO: stderr: ""
Sep 16 12:54:42.696: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-7486\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-m26c6\n"
Sep 16 12:54:42.696: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-7486 describe service agnhost-primary'
Sep 16 12:54:42.792: INFO: stderr: ""
Sep 16 12:54:42.792: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-7486\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.0.73.137\nIPs:               10.0.73.137\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.3.87:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 16 12:54:42.799: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-7486 describe node kt2-5be7f4b0-16de-master'
Sep 16 12:54:42.983: INFO: stderr: ""
Sep 16 12:54:42.983: INFO: stdout: "Name:               kt2-5be7f4b0-16de-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-central1\n                    failure-domain.beta.kubernetes.io/zone=us-central1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kt2-5be7f4b0-16de-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-central1\n                    topology.kubernetes.io/zone=us-central1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 16 Sep 2021 11:35:08 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  kt2-5be7f4b0-16de-master\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 16 Sep 2021 12:54:41 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 16 Sep 2021 11:35:25 +0000   Thu, 16 Sep 2021 11:35:25 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 16 Sep 2021 12:50:54 +0000   Thu, 16 Sep 2021 11:35:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 16 Sep 2021 12:50:54 +0000   
Thu, 16 Sep 2021 11:35:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 16 Sep 2021 12:50:54 +0000   Thu, 16 Sep 2021 11:35:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 16 Sep 2021 12:50:54 +0000   Thu, 16 Sep 2021 11:35:18 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.128.0.2\n  ExternalIP:   35.222.34.167\n  InternalDNS:  kt2-5be7f4b0-16de-master.c.k8s-infra-e2e-boskos-038.internal\n  Hostname:     kt2-5be7f4b0-16de-master.c.k8s-infra-e2e-boskos-038.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3773744Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3517744Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 3f80a3946b4aec1592a633a0b4700ffb\n  System UUID:                3f80a394-6b4a-ec15-92a6-33a0b4700ffb\n  Boot ID:                    3a75a020-8bca-41b0-8d04-6616307f7d8a\n  Kernel Version:             5.4.129+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.23.0-alpha.2.40+bea2e462a5b8c2\n  Kube-Proxy Version:         v1.23.0-alpha.2.40+bea2e462a5b8c2\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://k8s-infra-e2e-boskos-038/us-central1-b/kt2-5be7f4b0-16de-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                             
                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-events-kt2-5be7f4b0-16de-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 etcd-server-kt2-5be7f4b0-16de-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 fluentd-gcp-v3.2.0-s6vk8                            100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    76m\n  kube-system                 konnectivity-server-kt2-5be7f4b0-16de-master        25m (2%)      0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 kube-addon-manager-kt2-5be7f4b0-16de-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         78m\n  kube-system                 kube-apiserver-kt2-5be7f4b0-16de-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                 kube-controller-manager-kt2-5be7f4b0-16de-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                 kube-scheduler-kt2-5be7f4b0-16de-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         79m\n  kube-system                 l7-lb-controller-kt2-5be7f4b0-16de-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         79m\n  kube-system                 metadata-proxy-v0.1-tzbj4                           32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      79m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        997m (99%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage       
   0 (0%)       0 (0%)\n  hugepages-2Mi              0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:                      <none>\n"
Sep 16 12:54:42.983: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=kubectl-7486 describe namespace kubectl-7486'
Sep 16 12:54:43.234: INFO: stderr: ""
Sep 16 12:54:43.234: INFO: stdout: "Name:         kubectl-7486\nLabels:       e2e-framework=kubectl\n              e2e-run=b403288e-7c8d-44f9-a5a1-8d0cdbfce5f9\n              kubernetes.io/metadata.name=kubectl-7486\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:43.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7486" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":346,"completed":250,"skipped":4437,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}

------------------------------
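Every `Running '...kubectl --server=... --kubeconfig=... --namespace=... describe ...'` line in the preceding test shares the same command shape. A sketch of how such an argv could be assembled — a hypothetical helper for illustration only; the e2e framework builds an equivalent command string before shelling out, and the kubeconfig path here is a placeholder, not the artifact path from the log:

```python
def kubectl_describe_args(server, kubeconfig, namespace, kind, name,
                          kubectl="kubectl"):
    """Build the argv for a 'kubectl describe' call of the shape seen
    in the log: global flags first, then the describe subcommand."""
    return [
        kubectl,
        f"--server={server}",
        f"--kubeconfig={kubeconfig}",
        f"--namespace={namespace}",
        "describe", kind, name,
    ]

args = kubectl_describe_args(
    "https://35.222.34.167",
    "/path/to/kubeconfig",   # placeholder for the per-run artifact path
    "kubectl-7486", "pod", "agnhost-primary-m26c6",
)
```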
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-2d0fe438-bde8-490c-8e67-ec986ddc4d35
STEP: Creating a pod to test consume configMaps
Sep 16 12:54:43.507: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf" in namespace "projected-6930" to be "Succeeded or Failed"
Sep 16 12:54:43.521: INFO: Pod "pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf": Phase="Pending", Reason="", readiness=false. Elapsed: 14.289431ms
Sep 16 12:54:45.525: INFO: Pod "pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01845363s
STEP: Saw pod success
Sep 16 12:54:45.525: INFO: Pod "pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf" satisfied condition "Succeeded or Failed"
Sep 16 12:54:45.528: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf container agnhost-container: <nil>
STEP: delete the pod
Sep 16 12:54:45.553: INFO: Waiting for pod pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf to disappear
Sep 16 12:54:45.557: INFO: Pod pod-projected-configmaps-66925fb3-b474-4115-baa6-4565e3602ccf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:45.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6930" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":251,"skipped":4437,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 62 lines ...
• [SLOW TEST:11.635 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":252,"skipped":4462,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSS
------------------------------
[sig-node] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:54:57.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-3632" for this suite.
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":253,"skipped":4468,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Sep 16 12:55:01.816: INFO: Pod "test-recreate-deployment-785fd889-2vqgn" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-785fd889-2vqgn test-recreate-deployment-785fd889- deployment-3482  17fcabf4-6ab0-45a1-96c0-db6e253f3ef4 19287 0 2021-09-16 12:55:01 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:785fd889] map[] [{apps/v1 ReplicaSet test-recreate-deployment-785fd889 5c41456f-8cdd-44af-a688-ee3f028ace05 0xc004ceaeff 0xc004ceaf10}] []  [{kube-controller-manager Update v1 2021-09-16 12:55:01 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5c41456f-8cdd-44af-a688-ee3f028ace05\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 12:55:01 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2x7cm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2x7cm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-2z4b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:55:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:55:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-16 12:55:01 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 12:55:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-16 12:55:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:55:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3482" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":254,"skipped":4492,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:55:02.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8929" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":255,"skipped":4513,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip-transition in namespace services-1974
I0916 12:55:02.257886   96838 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-1974, replica count: 3
I0916 12:55:05.309100   96838 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 16 12:55:05.316: INFO: Creating new exec pod
Sep 16 12:55:08.355: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Sep 16 12:55:09.662: INFO: rc: 1
Sep 16 12:55:09.662: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:55:10.662: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Sep 16 12:55:11.837: INFO: rc: 1
Sep 16 12:55:11.837: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 12:55:12.662: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Sep 16 12:55:12.884: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Sep 16 12:55:12.884: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 12:55:12.884: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-1974 exec execpod-affinityn8nz9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.67.158 80'
... skipping 71 lines ...
• [SLOW TEST:45.007 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":256,"skipped":4528,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSS
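The session-affinity test above probes the Service roughly once per second, treating `nc: connect ... Connection refused` (rc: 1) as a transient failure and retrying until the connection succeeds. A minimal sketch of that retry loop, with a generic `retry_until_success` helper (the helper name and structure are assumptions for illustration, not the e2e framework's actual code):

```shell
#!/bin/sh
# Retry a probe command about once per second until it exits 0,
# mirroring the "rc: 1 ... Retrying..." pattern in the log above.
retry_until_success() {
  # $1 = max attempts; remaining args = probe command
  max=$1; shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    attempt=$((attempt + 1))
    sleep 1
  done
  echo "succeeded on attempt $attempt"
}

# In this run the probed command was (names taken from the log):
#   kubectl --namespace=services-1974 exec execpod-affinityn8nz9 -- \
#     /bin/sh -x -c 'echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
```

Note the successful probe still returned `HTTP/1.1 400 Bad Request`; the test only checks TCP reachability (the `nc` exit code), not the HTTP response.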
------------------------------
[sig-node] Variable Expansion 
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 26 lines ...
• [SLOW TEST:37.068 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":257,"skipped":4538,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] HostPort 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] HostPort
... skipping 35 lines ...
• [SLOW TEST:15.571 seconds]
[sig-network] HostPort
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":258,"skipped":4564,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 33 lines ...
• [SLOW TEST:20.110 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":259,"skipped":4610,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 12:56:59.915: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a" in namespace "downward-api-5015" to be "Succeeded or Failed"
Sep 16 12:56:59.925: INFO: Pod "downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.08122ms
Sep 16 12:57:01.930: INFO: Pod "downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015003533s
STEP: Saw pod success
Sep 16 12:57:01.930: INFO: Pod "downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a" satisfied condition "Succeeded or Failed"
Sep 16 12:57:01.933: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a container client-container: <nil>
STEP: delete the pod
Sep 16 12:57:01.980: INFO: Waiting for pod downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a to disappear
Sep 16 12:57:01.995: INFO: Pod downwardapi-volume-c77c49a5-54b3-4da5-a92f-d7b94ff8639a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:57:01.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5015" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4654,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
S
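The "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" lines above come from the framework polling the pod's phase every couple of seconds until it reaches a terminal state or the timeout expires. A rough sketch of that wait, under the assumption that the phase is fetched with a stand-in command (against a real cluster it would be something like `kubectl get pod "$pod" -n "$ns" -o jsonpath='{.status.phase}'`):

```shell
#!/bin/sh
# Poll a phase-reporting command until it prints a terminal pod phase
# (Succeeded or Failed) or the timeout elapses.
wait_for_completion() {
  # $1 = timeout in seconds, $2 = poll interval in seconds,
  # remaining args = command that prints the current pod phase
  timeout_s=$1; interval_s=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout_s" ]; do
    phase=$("$@")
    case "$phase" in
      Succeeded|Failed) echo "$phase"; return 0 ;;
    esac
    sleep "$interval_s"
    elapsed=$((elapsed + interval_s))
  done
  echo "timeout" >&2
  return 1
}
```

The test then asserts the observed terminal phase matched "Succeeded or Failed" (the "Saw pod success" step) before fetching container logs and deleting the pod.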
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-6b08e4e5-dcc8-4f58-8b1e-84fcd2fd944f
STEP: Creating a pod to test consume secrets
Sep 16 12:57:02.077: INFO: Waiting up to 5m0s for pod "pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf" in namespace "secrets-116" to be "Succeeded or Failed"
Sep 16 12:57:02.082: INFO: Pod "pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.068665ms
Sep 16 12:57:04.087: INFO: Pod "pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00989635s
STEP: Saw pod success
Sep 16 12:57:04.087: INFO: Pod "pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf" satisfied condition "Succeeded or Failed"
Sep 16 12:57:04.091: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 12:57:04.117: INFO: Waiting for pod pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf to disappear
Sep 16 12:57:04.121: INFO: Pod pod-secrets-0c7a8b64-7894-4832-b4fd-873a1a41eddf no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:57:04.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-116" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":261,"skipped":4655,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-c6a7ade0-6cfb-4220-af2c-7c671d3b7f42
STEP: Creating a pod to test consume configMaps
Sep 16 12:57:04.233: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb" in namespace "projected-9855" to be "Succeeded or Failed"
Sep 16 12:57:04.248: INFO: Pod "pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.902991ms
Sep 16 12:57:06.257: INFO: Pod "pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024119765s
STEP: Saw pod success
Sep 16 12:57:06.257: INFO: Pod "pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb" satisfied condition "Succeeded or Failed"
Sep 16 12:57:06.260: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb container agnhost-container: <nil>
STEP: delete the pod
Sep 16 12:57:06.289: INFO: Waiting for pod pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb to disappear
Sep 16 12:57:06.295: INFO: Pod pod-projected-configmaps-4f4b7e34-255b-4bbb-9f3e-9ac803f645eb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:57:06.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9855" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":262,"skipped":4661,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-hkl8
STEP: Creating a pod to test atomic-volume-subpath
Sep 16 12:57:06.394: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hkl8" in namespace "subpath-8011" to be "Succeeded or Failed"
Sep 16 12:57:06.406: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Pending", Reason="", readiness=false. Elapsed: 12.160741ms
Sep 16 12:57:08.413: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 2.018265831s
Sep 16 12:57:10.417: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 4.022988967s
Sep 16 12:57:12.422: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 6.027372265s
Sep 16 12:57:14.426: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 8.031913367s
Sep 16 12:57:16.432: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 10.037343624s
Sep 16 12:57:18.438: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 12.043699651s
Sep 16 12:57:20.442: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 14.04821461s
Sep 16 12:57:22.448: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 16.054126586s
Sep 16 12:57:24.454: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 18.059561006s
Sep 16 12:57:26.460: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Running", Reason="", readiness=true. Elapsed: 20.065392784s
Sep 16 12:57:28.483: INFO: Pod "pod-subpath-test-configmap-hkl8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.089230353s
STEP: Saw pod success
Sep 16 12:57:28.484: INFO: Pod "pod-subpath-test-configmap-hkl8" satisfied condition "Succeeded or Failed"
Sep 16 12:57:28.498: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-subpath-test-configmap-hkl8 container test-container-subpath-configmap-hkl8: <nil>
STEP: delete the pod
Sep 16 12:57:28.592: INFO: Waiting for pod pod-subpath-test-configmap-hkl8 to disappear
Sep 16 12:57:28.598: INFO: Pod pod-subpath-test-configmap-hkl8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-hkl8
Sep 16 12:57:28.598: INFO: Deleting pod "pod-subpath-test-configmap-hkl8" in namespace "subpath-8011"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":263,"skipped":4666,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 17 lines ...
STEP: verifying the updated pod is in kubernetes
Sep 16 12:57:33.258: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 12:57:33.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6418" for this suite.
•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4727,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
• [SLOW TEST:36.700 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":265,"skipped":4743,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 81 lines ...
• [SLOW TEST:304.311 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":266,"skipped":4787,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 13 lines ...
• [SLOW TEST:5.267 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":267,"skipped":4787,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":4792,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1238
    should create services for rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":346,"completed":269,"skipped":4804,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:03:32.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4214" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":270,"skipped":4807,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":346,"completed":271,"skipped":4827,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-rkxv
STEP: Creating a pod to test atomic-volume-subpath
Sep 16 13:03:40.141: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rkxv" in namespace "subpath-4237" to be "Succeeded or Failed"
Sep 16 13:03:40.151: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Pending", Reason="", readiness=false. Elapsed: 9.488415ms
Sep 16 13:03:42.156: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 2.014668703s
Sep 16 13:03:44.161: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 4.019860834s
Sep 16 13:03:46.165: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 6.023833886s
Sep 16 13:03:48.170: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 8.028922758s
Sep 16 13:03:50.174: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 10.033081186s
Sep 16 13:03:52.181: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 12.039841678s
Sep 16 13:03:54.186: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 14.044596521s
Sep 16 13:03:56.191: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 16.049666658s
Sep 16 13:03:58.197: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 18.055659653s
Sep 16 13:04:00.201: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Running", Reason="", readiness=true. Elapsed: 20.059820525s
Sep 16 13:04:02.206: INFO: Pod "pod-subpath-test-configmap-rkxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.064465944s
STEP: Saw pod success
Sep 16 13:04:02.206: INFO: Pod "pod-subpath-test-configmap-rkxv" satisfied condition "Succeeded or Failed"
Sep 16 13:04:02.209: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-subpath-test-configmap-rkxv container test-container-subpath-configmap-rkxv: <nil>
STEP: delete the pod
Sep 16 13:04:02.237: INFO: Waiting for pod pod-subpath-test-configmap-rkxv to disappear
Sep 16 13:04:02.244: INFO: Pod pod-subpath-test-configmap-rkxv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rkxv
Sep 16 13:04:02.244: INFO: Deleting pod "pod-subpath-test-configmap-rkxv" in namespace "subpath-4237"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":272,"skipped":4911,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
S
------------------------------
[sig-apps] DisruptionController 
  should observe PodDisruptionBudget status updated [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 18 lines ...
• [SLOW TEST:6.204 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":273,"skipped":4912,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:9.875 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":274,"skipped":4917,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 63 lines ...
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0916 13:05:24.427778    2874 boskos.go:86] Sending heartbeat to Boskos
I0916 13:10:24.454957    2874 boskos.go:86] Sending heartbeat to Boskos
Sep 16 13:14:18.703: INFO: Timed out waiting for the following pods to schedule
Sep 16 13:14:18.703: INFO: kube-system/konnectivity-agent-85q4n
Sep 16 13:14:18.703: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.5()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 16 13:14:18.703: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":346,"completed":274,"skipped":4974,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-3a34e596-7b52-4baa-a830-5bb3a097bb3e
STEP: Creating a pod to test consume secrets
Sep 16 13:14:19.578: INFO: Waiting up to 5m0s for pod "pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290" in namespace "secrets-2596" to be "Succeeded or Failed"
Sep 16 13:14:19.586: INFO: Pod "pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290": Phase="Pending", Reason="", readiness=false. Elapsed: 7.358235ms
Sep 16 13:14:21.592: INFO: Pod "pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290": Phase="Running", Reason="", readiness=true. Elapsed: 2.013386548s
Sep 16 13:14:23.598: INFO: Pod "pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019982976s
STEP: Saw pod success
Sep 16 13:14:23.598: INFO: Pod "pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290" satisfied condition "Succeeded or Failed"
Sep 16 13:14:23.605: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290 container secret-env-test: <nil>
STEP: delete the pod
Sep 16 13:14:23.641: INFO: Waiting for pod pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290 to disappear
Sep 16 13:14:23.648: INFO: Pod pod-secrets-a0200c6a-4e31-46d3-b9ce-9cc9d85fd290 no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:14:23.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2596" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":275,"skipped":4991,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:14:23.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8597" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":276,"skipped":4999,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Sep 16 13:14:23.967: INFO: cleanMinorVersion: 23
Sep 16 13:14:23.967: INFO: Minor version: 23+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:14:23.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-7863" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":277,"skipped":5005,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
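The "cleanMinorVersion: 23" / "Minor version: 23+" pair above reflects stripping the non-numeric suffix from the reported minor version (pre-release builds append "+"). A minimal sketch of that parsing step, assuming only what the two log lines show:

```python
def clean_minor_version(minor: str) -> int:
    """Keep the leading digits of a minor-version string,
    e.g. '23+' -> 23, '23' -> 23."""
    digits = ""
    for ch in minor:
        if not ch.isdigit():
            break
        digits += ch
    if not digits:
        raise ValueError(f"no leading digits in minor version {minor!r}")
    return int(digits)
```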
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 13:14:24.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163" in namespace "projected-8396" to be "Succeeded or Failed"
Sep 16 13:14:24.114: INFO: Pod "downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163": Phase="Pending", Reason="", readiness=false. Elapsed: 24.166657ms
Sep 16 13:14:26.119: INFO: Pod "downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.029016356s
STEP: Saw pod success
Sep 16 13:14:26.119: INFO: Pod "downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163" satisfied condition "Succeeded or Failed"
Sep 16 13:14:26.122: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163 container client-container: <nil>
STEP: delete the pod
Sep 16 13:14:26.151: INFO: Waiting for pod downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163 to disappear
Sep 16 13:14:26.155: INFO: Pod downwardapi-volume-99762089-ae01-4c8c-ba9f-1c8c926f0163 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:14:26.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8396" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":278,"skipped":5035,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 18 lines ...
• [SLOW TEST:12.526 seconds]
[sig-storage] Projected configMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":279,"skipped":5062,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 16 13:14:41.166: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:14:41.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9786" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":5093,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 170 lines ...
Sep 16 13:19:53.708: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:19:55.705: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:19:57.713: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:19:59.705: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:20:01.706: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:20:01.710: INFO: The status of Pod netserver-2 is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:20:01.710: FAIL: Unexpected error:
    <*errors.errorString | 0xc000244220>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

... skipping 122 lines ...
Logging node info for node kt2-5be7f4b0-16de-minion-group-lhnl
Sep 16 13:20:02.410: INFO: Node Info: &Node{ObjectMeta:{kt2-5be7f4b0-16de-minion-group-lhnl    d434588a-df2d-409e-82e6-873db6bb4433 22499 0 2021-09-16 11:35:17 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-central1 failure-domain.beta.kubernetes.io/zone:us-central1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:kt2-5be7f4b0-16de-minion-group-lhnl kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-central1 topology.kubernetes.io/zone:us-central1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  [{kubelet Update v1 2021-09-16 11:35:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {node-problem-detector Update v1 2021-09-16 11:35:22 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kubelet Update v1 2021-09-16 11:54:09 +0000 UTC FieldsV1 {"f:status":{"f:allocatable":{"f:ephemeral-storage":{}},"f:capacity":{"f:ephemeral-storage":{}},"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{}}},"f:images":{}}} status} {kube-controller-manager Update v1 2021-09-16 13:15:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}},"f:taints":{}}} } {kube-controller-manager Update v1 2021-09-16 13:15:05 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}},"k:{\"type\":\"Ready\"}":{"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-infra-e2e-boskos-038/us-central1-b/kt2-5be7f4b0-16de-minion-group-lhnl,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2021-09-16 13:15:05 +0000 UTC,},Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2021-09-16 13:15:10 +0000 UTC,},},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7821434880 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7559290880 0} {<nil>}  BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 
UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2021-09-16 13:10:33 +0000 UTC,LastTransitionTime:2021-09-16 11:35:21 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2021-09-16 11:35:32 +0000 UTC,LastTransitionTime:2021-09-16 11:35:32 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2021-09-16 13:13:25 +0000 UTC,LastTransitionTime:2021-09-16 13:15:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2021-09-16 13:13:25 +0000 UTC,LastTransitionTime:2021-09-16 13:15:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2021-09-16 13:13:25 
+0000 UTC,LastTransitionTime:2021-09-16 13:15:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:2021-09-16 13:13:25 +0000 UTC,LastTransitionTime:2021-09-16 13:15:05 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.128.0.5,},NodeAddress{Type:ExternalIP,Address:35.222.158.211,},NodeAddress{Type:InternalDNS,Address:kt2-5be7f4b0-16de-minion-group-lhnl.c.k8s-infra-e2e-boskos-038.internal,},NodeAddress{Type:Hostname,Address:kt2-5be7f4b0-16de-minion-group-lhnl.c.k8s-infra-e2e-boskos-038.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:caff7398a32d58236ec8e4f24f1c4486,SystemUUID:caff7398-a32d-5823-6ec8-e4f24f1c4486,BootID:b43560ab-d554-43d5-9ea4-b8bf1ecf980b,KernelVersion:5.4.129+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.4.6,KubeletVersion:v1.23.0-alpha.2.40+bea2e462a5b8c2,KubeProxyVersion:v1.23.0-alpha.2.40+bea2e462a5b8c2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/kube-proxy-amd64:v1.23.0-alpha.2.40_bea2e462a5b8c2],SizeBytes:104401605,},ContainerImage{Names:[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:810aadac042cee96db99d94cec2436482b886774681b44bb141254edda5e3cdf gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.17],SizeBytes:84029209,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nautilus@sha256:99c0d6f1ad24a1aa1905d9c6534d193f268f7b23f9add2ae6bb41f31094bdd5c k8s.gcr.io/e2e-test-images/nautilus:1.5],SizeBytes:49642095,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43 
k8s.gcr.io/e2e-test-images/agnhost:2.33],SizeBytes:49628485,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:20f25f275d46aa728f7615a1ccc19c78b2ed89435bf943a44b339f70f45508e6 k8s.gcr.io/e2e-test-images/httpd:2.4.39-2],SizeBytes:41902010,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3 k8s.gcr.io/e2e-test-images/httpd:2.4.38-2],SizeBytes:40764680,},ContainerImage{Names:[k8s.gcr.io/metrics-server/metrics-server@sha256:6c5603956c0aed6b4087a8716afce8eb22f664b13162346ee852b4fab305ca15 k8s.gcr.io/metrics-server/metrics-server:v0.5.0],SizeBytes:25804692,},ContainerImage{Names:[k8s.gcr.io/coredns/coredns@sha256:cc8fb77bc2a0541949d1d9320a641b82fd392b0d3d8145469ca4709ae769980e k8s.gcr.io/coredns/coredns:v1.8.0],SizeBytes:12945155,},ContainerImage{Names:[k8s.gcr.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 k8s.gcr.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[k8s.gcr.io/kas-network-proxy/proxy-agent@sha256:13f524458dee3a4b78eb3fd4a8c28929124bc969abd830bd72b5df2847ddfa38 k8s.gcr.io/kas-network-proxy/proxy-agent:v0.0.23],SizeBytes:7400170,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/nginx@sha256:13616070e3f29de4417eee434a8ef472221c9e51b3d037b5a6b46cef08eb7443 k8s.gcr.io/e2e-test-images/nginx:1.14-2],SizeBytes:6979041,},ContainerImage{Names:[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf 
k8s.gcr.io/e2e-test-images/busybox:1.29-2],SizeBytes:732424,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db k8s.gcr.io/pause:3.6],SizeBytes:301773,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Sep 16 13:20:02.411: INFO: 
Logging kubelet events for node kt2-5be7f4b0-16de-minion-group-lhnl
Sep 16 13:20:02.417: INFO: 
Logging pods the kubelet thinks are on node kt2-5be7f4b0-16de-minion-group-lhnl
Sep 16 13:20:07.449: INFO: Unable to retrieve kubelet pods for node kt2-5be7f4b0-16de-minion-group-lhnl: error trying to reach service: dial tcp 10.128.0.5:10250: i/o timeout
Sep 16 13:20:07.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
Sep 16 13:20:07.462: INFO: Condition Ready of node kt2-5be7f4b0-16de-minion-group-lhnl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-09-16 13:15:05 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-09-16 13:15:10 +0000 UTC}]. Failure
Sep 16 13:20:09.470: INFO: Condition Ready of node kt2-5be7f4b0-16de-minion-group-lhnl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-09-16 13:15:05 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-09-16 13:15:10 +0000 UTC}]. Failure
Sep 16 13:20:11.469: INFO: Condition Ready of node kt2-5be7f4b0-16de-minion-group-lhnl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-09-16 13:15:05 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-09-16 13:15:10 +0000 UTC}]. Failure
Sep 16 13:20:13.471: INFO: Condition Ready of node kt2-5be7f4b0-16de-minion-group-lhnl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-09-16 13:15:05 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-09-16 13:15:10 +0000 UTC}]. Failure
Sep 16 13:20:15.469: INFO: Condition Ready of node kt2-5be7f4b0-16de-minion-group-lhnl is false, but Node is tainted by NodeController with [{node.kubernetes.io/unreachable  NoSchedule 2021-09-16 13:15:05 +0000 UTC} {node.kubernetes.io/unreachable  NoExecute 2021-09-16 13:15:10 +0000 UTC}]. Failure
... skipping 55 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Sep 16 13:20:01.710: Unexpected error:
        <*errors.errorString | 0xc000244220>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":5140,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 13:21:57.617: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:21:58.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8650" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":346,"completed":281,"skipped":5149,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 21 lines ...
I0916 13:22:05.795948   96838 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4232, replica count: 3
I0916 13:22:08.847618   96838 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0916 13:22:11.848626   96838 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 16 13:22:11.861: INFO: Creating new exec pod
Sep 16 13:22:14.891: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-4232 exec execpod-affinity6vnvd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Sep 16 13:22:16.466: INFO: rc: 1
Sep 16 13:22:16.466: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-4232 exec execpod-affinity6vnvd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 13:22:17.467: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-4232 exec execpod-affinity6vnvd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Sep 16 13:22:17.921: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Sep 16 13:22:17.921: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 16 13:22:17.921: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-4232 exec execpod-affinity6vnvd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.180.150 80'
... skipping 38 lines ...
• [SLOW TEST:42.854 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":282,"skipped":5156,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:31.097 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":283,"skipped":5160,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":284,"skipped":5173,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 18 lines ...
• [SLOW TEST:80.119 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":285,"skipped":5220,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 16 13:25:02.930: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:02.950: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:02.961: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:03.021: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:03.030: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:03.039: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:03.039: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:08.048: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.055: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.062: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.118: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.127: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.219: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.227: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.419: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:08.419: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:13.051: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.059: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.120: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.129: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.137: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.221: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.242: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.321: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:13.321: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:18.049: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.056: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.118: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.125: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.134: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.250: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.369: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.393: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:18.393: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:23.049: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.054: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.061: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.070: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.319: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.329: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.418: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.519: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:23.519: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

I0916 13:25:24.531372    2874 boskos.go:86] Sending heartbeat to Boskos
Sep 16 13:25:28.057: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.070: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.080: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.122: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.132: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.221: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.238: INFO: Unable to read jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.251: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:28.251: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local jessie_udp@dns-test-service-2.dns-4465.svc.cluster.local jessie_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:33.050: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:33.064: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:33.118: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:33.127: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local from pod dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571: the server could not find the requested resource (get pods dns-test-bd715979-8c63-442f-923e-276c1e5d4571)
Sep 16 13:25:33.419: INFO: Lookups using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-4465.svc.cluster.local wheezy_udp@dns-test-service-2.dns-4465.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-4465.svc.cluster.local]

Sep 16 13:25:38.320: INFO: DNS probes using dns-4465/dns-test-bd715979-8c63-442f-923e-276c1e5d4571 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 4 lines ...
• [SLOW TEST:37.779 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":286,"skipped":5289,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
• [SLOW TEST:6.309 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":287,"skipped":5327,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 16 13:25:44.794: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 16 13:25:44.872: INFO: Waiting up to 5m0s for pod "client-containers-98a45424-4122-43b9-9771-91997a94ed8f" in namespace "containers-2084" to be "Succeeded or Failed"
Sep 16 13:25:44.880: INFO: Pod "client-containers-98a45424-4122-43b9-9771-91997a94ed8f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.5052ms
Sep 16 13:25:46.884: INFO: Pod "client-containers-98a45424-4122-43b9-9771-91997a94ed8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012077671s
STEP: Saw pod success
Sep 16 13:25:46.884: INFO: Pod "client-containers-98a45424-4122-43b9-9771-91997a94ed8f" satisfied condition "Succeeded or Failed"
Sep 16 13:25:46.888: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod client-containers-98a45424-4122-43b9-9771-91997a94ed8f container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:25:47.035: INFO: Waiting for pod client-containers-98a45424-4122-43b9-9771-91997a94ed8f to disappear
Sep 16 13:25:47.039: INFO: Pod client-containers-98a45424-4122-43b9-9771-91997a94ed8f no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:25:47.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2084" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":288,"skipped":5339,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}

------------------------------
[sig-apps] DisruptionController 
  should update/patch PodDisruptionBudget status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 15 lines ...
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:25:51.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9716" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":289,"skipped":5339,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Sep 16 13:25:51.377: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:25:51.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2398" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":290,"skipped":5359,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 13:25:51.392: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 13:25:53.480: INFO: Deleting pod "var-expansion-0bb1fa28-af7e-4c72-b6d7-a44b6d17f479" in namespace "var-expansion-5624"
Sep 16 13:25:53.488: INFO: Wait up to 5m0s for pod "var-expansion-0bb1fa28-af7e-4c72-b6d7-a44b6d17f479" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:25:55.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5624" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":291,"skipped":5361,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 13:25:55.759: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:25:59.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4335" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":346,"completed":292,"skipped":5364,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 15 lines ...
• [SLOW TEST:20.151 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":293,"skipped":5372,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 14 lines ...
STEP: Creating configMap with name cm-test-opt-create-dcad70ff-1945-4faf-b23e-9eaf10888fbf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:24.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2636" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":294,"skipped":5383,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 13:26:24.231: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 16 13:26:24.297: INFO: Waiting up to 5m0s for pod "pod-c84b722a-17a6-4278-985a-ed7c36e6088a" in namespace "emptydir-3689" to be "Succeeded or Failed"
Sep 16 13:26:24.305: INFO: Pod "pod-c84b722a-17a6-4278-985a-ed7c36e6088a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.383106ms
Sep 16 13:26:26.310: INFO: Pod "pod-c84b722a-17a6-4278-985a-ed7c36e6088a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012637084s
STEP: Saw pod success
Sep 16 13:26:26.310: INFO: Pod "pod-c84b722a-17a6-4278-985a-ed7c36e6088a" satisfied condition "Succeeded or Failed"
Sep 16 13:26:26.313: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-c84b722a-17a6-4278-985a-ed7c36e6088a container test-container: <nil>
STEP: delete the pod
Sep 16 13:26:26.354: INFO: Waiting for pod pod-c84b722a-17a6-4278-985a-ed7c36e6088a to disappear
Sep 16 13:26:26.359: INFO: Pod pod-c84b722a-17a6-4278-985a-ed7c36e6088a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:26.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3689" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":295,"skipped":5387,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 13:26:26.508: INFO: Waiting up to 5m0s for pod "downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7" in namespace "downward-api-6402" to be "Succeeded or Failed"
Sep 16 13:26:26.544: INFO: Pod "downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7": Phase="Pending", Reason="", readiness=false. Elapsed: 35.351249ms
Sep 16 13:26:28.551: INFO: Pod "downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.042222725s
STEP: Saw pod success
Sep 16 13:26:28.551: INFO: Pod "downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7" satisfied condition "Succeeded or Failed"
Sep 16 13:26:28.554: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7 container client-container: <nil>
STEP: delete the pod
Sep 16 13:26:28.582: INFO: Waiting for pod downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7 to disappear
Sep 16 13:26:28.586: INFO: Pod downwardapi-volume-668b8490-70ec-4459-b52b-7417563712a7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:28.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6402" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5392,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-a141f144-804f-44ea-9529-ae1fad0af13f
STEP: Creating a pod to test consume configMaps
Sep 16 13:26:28.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7" in namespace "configmap-5842" to be "Succeeded or Failed"
Sep 16 13:26:28.697: INFO: Pod "pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.842542ms
Sep 16 13:26:30.702: INFO: Pod "pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012733589s
STEP: Saw pod success
Sep 16 13:26:30.702: INFO: Pod "pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7" satisfied condition "Succeeded or Failed"
Sep 16 13:26:30.706: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-2z4b pod pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:26:30.738: INFO: Waiting for pod pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7 to disappear
Sep 16 13:26:30.743: INFO: Pod pod-configmaps-4ca0020e-383c-41c5-82f8-5c85f32b4dd7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:30.743: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5842" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":5432,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":298,"skipped":5443,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-e2ec9b77-4bf9-4b2c-ac20-75aa3642a232
STEP: Creating a pod to test consume configMaps
Sep 16 13:26:41.123: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f" in namespace "projected-1697" to be "Succeeded or Failed"
Sep 16 13:26:41.126: INFO: Pod "pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.172017ms
Sep 16 13:26:43.133: INFO: Pod "pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009352766s
STEP: Saw pod success
Sep 16 13:26:43.133: INFO: Pod "pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f" satisfied condition "Succeeded or Failed"
Sep 16 13:26:43.137: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 16 13:26:43.164: INFO: Waiting for pod pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f to disappear
Sep 16 13:26:43.168: INFO: Pod pod-projected-configmaps-f5c2c1ee-7775-4f06-8533-c154670bca7f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1697" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5446,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 16 13:26:43.179: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Sep 16 13:26:43.259: INFO: Waiting up to 5m0s for pod "client-containers-c538755a-b780-4e1c-9585-7c6692499d24" in namespace "containers-4064" to be "Succeeded or Failed"
Sep 16 13:26:43.267: INFO: Pod "client-containers-c538755a-b780-4e1c-9585-7c6692499d24": Phase="Pending", Reason="", readiness=false. Elapsed: 7.056207ms
Sep 16 13:26:45.272: INFO: Pod "client-containers-c538755a-b780-4e1c-9585-7c6692499d24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012970046s
STEP: Saw pod success
Sep 16 13:26:45.272: INFO: Pod "client-containers-c538755a-b780-4e1c-9585-7c6692499d24" satisfied condition "Succeeded or Failed"
Sep 16 13:26:45.276: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod client-containers-c538755a-b780-4e1c-9585-7c6692499d24 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:26:45.300: INFO: Waiting for pod client-containers-c538755a-b780-4e1c-9585-7c6692499d24 to disappear
Sep 16 13:26:45.303: INFO: Pod client-containers-c538755a-b780-4e1c-9585-7c6692499d24 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:45.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4064" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":300,"skipped":5463,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 13:26:45.315: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 16 13:26:45.415: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:45.421: INFO: Number of nodes with available pods: 0
Sep 16 13:26:45.421: INFO: Node kt2-5be7f4b0-16de-minion-group-2z4b is running more than one daemon pod
Sep 16 13:26:46.430: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:46.440: INFO: Number of nodes with available pods: 0
Sep 16 13:26:46.440: INFO: Node kt2-5be7f4b0-16de-minion-group-2z4b is running more than one daemon pod
Sep 16 13:26:47.429: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:47.438: INFO: Number of nodes with available pods: 3
Sep 16 13:26:47.438: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 16 13:26:47.487: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:47.499: INFO: Number of nodes with available pods: 2
Sep 16 13:26:47.499: INFO: Node kt2-5be7f4b0-16de-minion-group-2z4b is running more than one daemon pod
Sep 16 13:26:48.509: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:48.517: INFO: Number of nodes with available pods: 2
Sep 16 13:26:48.517: INFO: Node kt2-5be7f4b0-16de-minion-group-2z4b is running more than one daemon pod
Sep 16 13:26:49.505: INFO: DaemonSet pods can't tolerate node kt2-5be7f4b0-16de-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 16 13:26:49.510: INFO: Number of nodes with available pods: 3
Sep 16 13:26:49.510: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5985, will wait for the garbage collector to delete the pods
Sep 16 13:26:49.590: INFO: Deleting DaemonSet.extensions daemon-set took: 18.692539ms
Sep 16 13:26:49.790: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.677969ms
... skipping 8 lines ...
Sep 16 13:26:52.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5985" for this suite.

• [SLOW TEST:6.723 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":301,"skipped":5523,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 34 lines ...
• [SLOW TEST:7.178 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":302,"skipped":5531,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should complete a service status lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 42 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:26:59.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3340" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":303,"skipped":5545,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
• [SLOW TEST:10.796 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":304,"skipped":5546,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 48 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":305,"skipped":5701,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[sig-apps] Deployment 
  Deployment should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
Sep 16 13:28:23.364: INFO: Pod "test-new-deployment-5c557bc5bf-hhrs6" is available:
&Pod{ObjectMeta:{test-new-deployment-5c557bc5bf-hhrs6 test-new-deployment-5c557bc5bf- deployment-165  98e82c83-6960-4c23-b610-b1f73d968751 25069 0 2021-09-16 13:28:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5c557bc5bf] map[] [{apps/v1 ReplicaSet test-new-deployment-5c557bc5bf a517c04d-73be-4662-843a-451316f6eeb7 0xc003e8efe0 0xc003e8efe1}] []  [{kube-controller-manager Update v1 2021-09-16 13:28:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a517c04d-73be-4662-843a-451316f6eeb7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 13:28:22 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.117\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2wvd9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2wvd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-5be7f4b0-16de-minion-group-lhnl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 13:28:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 13:28:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 13:28:22 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 13:28:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:10.64.3.117,StartTime:2021-09-16 13:28:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-16 13:28:22 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9b563a4e84c632a62984cb0060c1e360bd50d0ba7a2edca8fa62862f13bd7562,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.117,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:28:23.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-165" for this suite.
•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":306,"skipped":5701,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 29 lines ...
• [SLOW TEST:10.290 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":307,"skipped":5714,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-fbf0a207-c810-436e-a7fd-e00f450d0815
STEP: Creating a pod to test consume secrets
Sep 16 13:28:33.891: INFO: Waiting up to 5m0s for pod "pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666" in namespace "secrets-6269" to be "Succeeded or Failed"
Sep 16 13:28:33.900: INFO: Pod "pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666": Phase="Pending", Reason="", readiness=false. Elapsed: 8.379815ms
Sep 16 13:28:35.908: INFO: Pod "pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016554472s
STEP: Saw pod success
Sep 16 13:28:35.908: INFO: Pod "pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666" satisfied condition "Succeeded or Failed"
Sep 16 13:28:35.913: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 13:28:35.958: INFO: Waiting for pod pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666 to disappear
Sep 16 13:28:35.965: INFO: Pod pod-secrets-d7403c46-c900-4546-aacb-1e6f29805666 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:28:35.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6269" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":308,"skipped":5732,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
S
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 23 lines ...
• [SLOW TEST:13.193 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":309,"skipped":5733,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 20 lines ...
• [SLOW TEST:5.257 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":310,"skipped":5745,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 21 lines ...
• [SLOW TEST:10.226 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":311,"skipped":5751,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 13:29:04.666: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 16 13:29:04.751: INFO: Waiting up to 5m0s for pod "pod-549b2e12-9a3c-47ed-a49b-20fad4117982" in namespace "emptydir-1955" to be "Succeeded or Failed"
Sep 16 13:29:04.770: INFO: Pod "pod-549b2e12-9a3c-47ed-a49b-20fad4117982": Phase="Pending", Reason="", readiness=false. Elapsed: 18.985744ms
Sep 16 13:29:06.776: INFO: Pod "pod-549b2e12-9a3c-47ed-a49b-20fad4117982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.024933405s
STEP: Saw pod success
Sep 16 13:29:06.776: INFO: Pod "pod-549b2e12-9a3c-47ed-a49b-20fad4117982" satisfied condition "Succeeded or Failed"
Sep 16 13:29:06.779: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-549b2e12-9a3c-47ed-a49b-20fad4117982 container test-container: <nil>
STEP: delete the pod
Sep 16 13:29:06.805: INFO: Waiting for pod pod-549b2e12-9a3c-47ed-a49b-20fad4117982 to disappear
Sep 16 13:29:06.809: INFO: Pod pod-549b2e12-9a3c-47ed-a49b-20fad4117982 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:29:06.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1955" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":312,"skipped":5751,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-f81135a8-0ebb-42ce-8c45-5abb57b7fede
STEP: Creating a pod to test consume configMaps
Sep 16 13:29:06.915: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342" in namespace "projected-5212" to be "Succeeded or Failed"
Sep 16 13:29:06.921: INFO: Pod "pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342": Phase="Pending", Reason="", readiness=false. Elapsed: 5.840639ms
Sep 16 13:29:08.926: INFO: Pod "pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011462539s
STEP: Saw pod success
Sep 16 13:29:08.926: INFO: Pod "pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342" satisfied condition "Succeeded or Failed"
Sep 16 13:29:08.931: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:29:09.000: INFO: Waiting for pod pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342 to disappear
Sep 16 13:29:09.010: INFO: Pod pod-projected-configmaps-ff86ca1b-5f9f-4ebd-8ea5-6411ea1eb342 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:29:09.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5212" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":313,"skipped":5762,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:29:11.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8502" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":5770,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-6157
I0916 13:29:11.405751   96838 runners.go:193] Created replication controller with name: externalname-service, namespace: services-6157, replica count: 2
Sep 16 13:29:14.456: INFO: Creating new exec pod
I0916 13:29:14.455959   96838 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 16 13:29:19.509: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-6157 exec execpodtb42f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 16 13:29:20.717: INFO: rc: 1
Sep 16 13:29:20.717: INFO: Service reachability failing with error: error running /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-6157 exec execpodtb42f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 16 13:29:21.718: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-6157 exec execpodtb42f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 16 13:29:22.175: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Sep 16 13:29:22.175: INFO: stdout: "externalname-service-jrlxs"
Sep 16 13:29:22.175: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl --server=https://35.222.34.167 --kubeconfig=/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig --namespace=services-6157 exec execpodtb42f -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.144.239 80'
... skipping 19 lines ...
• [SLOW TEST:13.885 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":315,"skipped":5784,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":316,"skipped":5786,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 25 lines ...
• [SLOW TEST:12.448 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":317,"skipped":5809,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-e37ef8fa-5704-4967-8267-eb0382be6b29
STEP: Creating a pod to test consume secrets
Sep 16 13:30:37.994: INFO: Waiting up to 5m0s for pod "pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40" in namespace "secrets-8665" to be "Succeeded or Failed"
Sep 16 13:30:38.017: INFO: Pod "pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40": Phase="Pending", Reason="", readiness=false. Elapsed: 22.49434ms
Sep 16 13:30:40.021: INFO: Pod "pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026834838s
STEP: Saw pod success
Sep 16 13:30:40.021: INFO: Pod "pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40" satisfied condition "Succeeded or Failed"
Sep 16 13:30:40.024: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 13:30:40.055: INFO: Waiting for pod pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40 to disappear
Sep 16 13:30:40.060: INFO: Pod pod-secrets-77a4c107-5f11-4213-b65a-9e684a7fac40 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:30:40.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8665" for this suite.
STEP: Destroying namespace "secret-namespace-8610" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":318,"skipped":5840,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-d3f496ac-479c-494b-94e9-1e382f6ccc12
STEP: Creating a pod to test consume configMaps
Sep 16 13:30:40.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07" in namespace "configmap-2912" to be "Succeeded or Failed"
Sep 16 13:30:40.155: INFO: Pod "pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07": Phase="Pending", Reason="", readiness=false. Elapsed: 7.565ms
Sep 16 13:30:42.159: INFO: Pod "pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011531353s
STEP: Saw pod success
Sep 16 13:30:42.159: INFO: Pod "pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07" satisfied condition "Succeeded or Failed"
Sep 16 13:30:42.162: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:30:42.185: INFO: Waiting for pod pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07 to disappear
Sep 16 13:30:42.192: INFO: Pod pod-configmaps-8ca02f16-1908-4932-9e7c-e491bb243a07 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:30:42.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2912" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":319,"skipped":5885,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 11 lines ...
Sep 16 13:30:44.293: INFO: The status of Pod pod-hostip-f33b6523-d348-49a3-b164-77257b8d3e7c is Running (Ready = true)
Sep 16 13:30:44.299: INFO: Pod pod-hostip-f33b6523-d348-49a3-b164-77257b8d3e7c has hostIP: 10.128.0.5
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:30:44.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5010" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":320,"skipped":5896,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Sep 16 13:30:44.403: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-458  783a763b-c6cc-4264-bdb5-914a1d66a600 25750 0 2021-09-16 13:30:44 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-16 13:30:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 16 13:30:44.403: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-458  783a763b-c6cc-4264-bdb5-914a1d66a600 25751 0 2021-09-16 13:30:44 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-16 13:30:44 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:30:44.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-458" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":321,"skipped":5929,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
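The Watchers test above starts a watch at a specific `resourceVersion` and expects to receive only the events that occurred after that version — hence the `Got : MODIFIED … 25750` and `Got : DELETED … 25751` lines. A simplified, self-contained sketch of that replay semantics (the `event` struct and `replayFrom` are illustrative stand-ins, not the client-go watch API; the resource versions are copied from the log for flavor):

```go
package main

import "fmt"

// event is a simplified stand-in for a watch.Event: just the event type
// and the object's resourceVersion.
type event struct {
	typ             string
	resourceVersion int
}

// replayFrom returns the events a watch started at startRV would deliver:
// everything with a resourceVersion strictly greater than startRV, in order.
func replayFrom(history []event, startRV int) []event {
	var out []event
	for _, e := range history {
		if e.resourceVersion > startRV {
			out = append(out, e)
		}
	}
	return out
}

func main() {
	history := []event{
		{"ADDED", 25748},
		{"MODIFIED", 25749},
		{"MODIFIED", 25750},
		{"DELETED", 25751},
	}
	// Start watching from RV 25749; only the later mutation and the
	// deletion are delivered, matching the two "Got :" lines in the log.
	for _, e := range replayFrom(history, 25749) {
		fmt.Println(e.typ, e.resourceVersion)
	}
}
```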
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 10 lines ...
Sep 16 13:30:46.587: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
Sep 16 13:30:46.830: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:30:46.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-260" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":322,"skipped":5950,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Service endpoints latency
... skipping 424 lines ...
• [SLOW TEST:11.506 seconds]
[sig-network] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":346,"completed":323,"skipped":5985,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5181
STEP: Waiting until pod test-pod will start running in namespace statefulset-5181
STEP: Creating statefulset with conflicting port in namespace statefulset-5181
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5181
Sep 16 13:31:00.622: INFO: Observed stateful pod in namespace: statefulset-5181, name: ss-0, uid: 5c61f5b0-3b59-4154-b3c9-7e76f6f781df, status phase: Pending. Waiting for statefulset controller to delete.
Sep 16 13:31:00.662: INFO: Observed stateful pod in namespace: statefulset-5181, name: ss-0, uid: 5c61f5b0-3b59-4154-b3c9-7e76f6f781df, status phase: Failed. Waiting for statefulset controller to delete.
Sep 16 13:31:00.696: INFO: Observed stateful pod in namespace: statefulset-5181, name: ss-0, uid: 5c61f5b0-3b59-4154-b3c9-7e76f6f781df, status phase: Failed. Waiting for statefulset controller to delete.
Sep 16 13:31:00.712: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5181
STEP: Removing pod with conflicting port in namespace statefulset-5181
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5181 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Sep 16 13:31:02.811: INFO: Deleting all statefulset in ns statefulset-5181
... skipping 10 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":324,"skipped":5989,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Sep 16 13:31:13.912: INFO: stderr: ""
Sep 16 13:31:13.912: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:13.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4397" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":325,"skipped":6015,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
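The `kubectl diff` test above relies on the command's documented exit-code convention: 0 means no differences, 1 means differences were found, and anything greater than 1 indicates an error. A small sketch encoding that convention as a pure function (`classifyDiffExit` is an illustrative helper, not part of kubectl):

```go
package main

import "fmt"

// classifyDiffExit interprets a kubectl diff exit code per the documented
// convention: 0 = no differences, 1 = differences found, >1 = kubectl error.
func classifyDiffExit(code int) string {
	switch {
	case code == 0:
		return "no differences"
	case code == 1:
		return "differences found"
	default:
		return "kubectl error"
	}
}

func main() {
	for _, code := range []int{0, 1, 2} {
		fmt.Printf("exit %d: %s\n", code, classifyDiffExit(code))
	}
}
```

This is why the test must treat exit code 1 as the success case when it expects a difference, rather than as a failure the way most CLI invocations would.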
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 2 lines ...
Sep 16 13:31:14.052: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 16 13:31:14.262: INFO: Waiting up to 5m0s for pod "security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2" in namespace "security-context-9732" to be "Succeeded or Failed"
Sep 16 13:31:14.284: INFO: Pod "security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.567687ms
Sep 16 13:31:16.295: INFO: Pod "security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.033062253s
STEP: Saw pod success
Sep 16 13:31:16.295: INFO: Pod "security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2" satisfied condition "Succeeded or Failed"
Sep 16 13:31:16.317: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2 container test-container: <nil>
STEP: delete the pod
Sep 16 13:31:16.571: INFO: Waiting for pod security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2 to disappear
Sep 16 13:31:16.637: INFO: Pod security-context-c96e03cb-0ccc-47a0-ae78-a1954c37b5c2 no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:16.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9732" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":326,"skipped":6044,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 33 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":327,"skipped":6076,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 13:31:37.341: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37" in namespace "downward-api-5260" to be "Succeeded or Failed"
Sep 16 13:31:37.348: INFO: Pod "downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37": Phase="Pending", Reason="", readiness=false. Elapsed: 7.168002ms
Sep 16 13:31:39.353: INFO: Pod "downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011952452s
STEP: Saw pod success
Sep 16 13:31:39.353: INFO: Pod "downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37" satisfied condition "Succeeded or Failed"
Sep 16 13:31:39.357: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37 container client-container: <nil>
STEP: delete the pod
Sep 16 13:31:39.383: INFO: Waiting for pod downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37 to disappear
Sep 16 13:31:39.387: INFO: Pod downwardapi-volume-3b99c8af-ae9e-4852-82d6-dfdcbc8d2a37 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:39.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5260" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":328,"skipped":6082,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 5 lines ...
[BeforeEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 13:31:39.478: INFO: The status of Pod server-envvars-2012d7e8-232c-4832-b003-8e136de5763b is Pending, waiting for it to be Running (with Ready = true)
Sep 16 13:31:41.483: INFO: The status of Pod server-envvars-2012d7e8-232c-4832-b003-8e136de5763b is Running (Ready = true)
Sep 16 13:31:41.524: INFO: Waiting up to 5m0s for pod "client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f" in namespace "pods-9634" to be "Succeeded or Failed"
Sep 16 13:31:41.529: INFO: Pod "client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282383ms
Sep 16 13:31:43.534: INFO: Pod "client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009665659s
STEP: Saw pod success
Sep 16 13:31:43.534: INFO: Pod "client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f" satisfied condition "Succeeded or Failed"
Sep 16 13:31:43.537: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f container env3cont: <nil>
STEP: delete the pod
Sep 16 13:31:43.566: INFO: Waiting for pod client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f to disappear
Sep 16 13:31:43.572: INFO: Pod client-envvars-c3a16b72-a9ff-45b3-bd1e-92255863203f no longer exists
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:43.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9634" for this suite.
•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":329,"skipped":6109,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 13:31:43.584: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 13:31:43.657: INFO: Waiting up to 5m0s for pod "downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117" in namespace "downward-api-7850" to be "Succeeded or Failed"
Sep 16 13:31:43.666: INFO: Pod "downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117": Phase="Pending", Reason="", readiness=false. Elapsed: 8.918075ms
Sep 16 13:31:45.670: INFO: Pod "downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01309275s
STEP: Saw pod success
Sep 16 13:31:45.670: INFO: Pod "downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117" satisfied condition "Succeeded or Failed"
Sep 16 13:31:45.673: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117 container dapi-container: <nil>
STEP: delete the pod
Sep 16 13:31:45.714: INFO: Waiting for pod downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117 to disappear
Sep 16 13:31:45.727: INFO: Pod downward-api-dda80fcb-5b9f-4a61-8520-5286b5ba8117 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:45.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7850" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":6167,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
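The Downward API test above creates a pod whose container receives its own CPU/memory limits and requests as environment variables via `resourceFieldRef`. A minimal manifest sketch of that kind of pod (all names, the image, and the resource values are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: registry.k8s.io/e2e-test-images/busybox:1.29  # illustrative image
    command: ["sh", "-c", "env"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: 500m
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          resource: requests.memory
```

When `resourceFieldRef.containerName` is omitted, the value is taken from the container the env var is defined on, which is the common case for tests like this one.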
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 17 lines ...
Sep 16 13:31:47.948: INFO: Pod quantity 2 is different from expected quantity 0
Sep 16 13:31:48.953: INFO: Pod quantity 2 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:31:49.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1729" for this suite.
•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":331,"skipped":6222,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 16 13:31:49.959: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 13:31:50.081: INFO: created pod
Sep 16 13:31:50.081: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4323" to be "Succeeded or Failed"
Sep 16 13:31:50.090: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.179055ms
Sep 16 13:31:52.095: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013983707s
STEP: Saw pod success
Sep 16 13:31:52.095: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 16 13:32:22.097: INFO: polling logs
Sep 16 13:32:22.107: INFO: Pod logs: 
2021/09/16 13:31:51 OK: Got token
2021/09/16 13:31:51 validating with in-cluster discovery
2021/09/16 13:31:51 OK: got issuer https://kubernetes.default.svc.cluster.local
2021/09/16 13:31:51 Full, not-validated claims: 
... skipping 13 lines ...
• [SLOW TEST:32.165 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":332,"skipped":6234,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-4bc11990-e51b-4ce8-818b-38da8b38fb84
STEP: Creating a pod to test consume configMaps
Sep 16 13:32:22.201: INFO: Waiting up to 5m0s for pod "pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc" in namespace "configmap-9464" to be "Succeeded or Failed"
Sep 16 13:32:22.209: INFO: Pod "pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc": Phase="Pending", Reason="", readiness=false. Elapsed: 7.947607ms
Sep 16 13:32:24.212: INFO: Pod "pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011486822s
STEP: Saw pod success
Sep 16 13:32:24.212: INFO: Pod "pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc" satisfied condition "Succeeded or Failed"
Sep 16 13:32:24.216: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc container agnhost-container: <nil>
STEP: delete the pod
Sep 16 13:32:24.242: INFO: Waiting for pod pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc to disappear
Sep 16 13:32:24.248: INFO: Pod pod-configmaps-b11859b4-3911-4a47-b92a-8e8d603735fc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:32:24.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9464" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":333,"skipped":6264,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-9294/configmap-test-973f1a12-84e2-412c-9dab-f03f9db81405
STEP: Creating a pod to test consume configMaps
Sep 16 13:32:24.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c" in namespace "configmap-9294" to be "Succeeded or Failed"
Sep 16 13:32:24.349: INFO: Pod "pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.154946ms
Sep 16 13:32:26.354: INFO: Pod "pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01249599s
STEP: Saw pod success
Sep 16 13:32:26.354: INFO: Pod "pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c" satisfied condition "Succeeded or Failed"
Sep 16 13:32:26.359: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c container env-test: <nil>
STEP: delete the pod
Sep 16 13:32:26.385: INFO: Waiting for pod pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c to disappear
Sep 16 13:32:26.395: INFO: Pod pod-configmaps-d066239e-0c2f-4327-8ec9-baf19da2518c no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:32:26.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9294" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":334,"skipped":6273,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 28 lines ...
• [SLOW TEST:7.194 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":335,"skipped":6298,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 109 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":336,"skipped":6319,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSS
------------------------------
[sig-apps] Deployment 
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 113 lines ...
• [SLOW TEST:8.963 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":337,"skipped":6322,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Sep 16 13:33:59.928: INFO: Running '/logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubectl exec --namespace=svcaccounts-8867 pod-service-account-d5baa210-260f-4ee6-a565-52447adf44ba -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:34:00.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8867" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":346,"completed":338,"skipped":6354,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-81faed0b-268f-41b5-9d77-cfa4c0c89c50
STEP: Creating a pod to test consume secrets
Sep 16 13:34:00.290: INFO: Waiting up to 5m0s for pod "pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55" in namespace "secrets-4901" to be "Succeeded or Failed"
Sep 16 13:34:00.296: INFO: Pod "pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.68436ms
Sep 16 13:34:02.304: INFO: Pod "pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014327859s
STEP: Saw pod success
Sep 16 13:34:02.304: INFO: Pod "pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55" satisfied condition "Succeeded or Failed"
Sep 16 13:34:02.310: INFO: Trying to get logs from node kt2-5be7f4b0-16de-minion-group-lhnl pod pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55 container secret-volume-test: <nil>
STEP: delete the pod
Sep 16 13:34:02.486: INFO: Waiting for pod pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55 to disappear
Sep 16 13:34:02.493: INFO: Pod pod-secrets-3bf68a43-4efe-4a90-9412-e07c33ccff55 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:34:02.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4901" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":339,"skipped":6361,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 16 13:34:02.516: INFO: >>> kubeConfig: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
I0916 13:35:24.566797    2874 boskos.go:86] Sending heartbeat to Boskos
STEP: updating the pod
Sep 16 13:36:03.145: INFO: Successfully updated pod "var-expansion-abd2aa3d-75de-41ab-8442-f759bf608aae"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Sep 16 13:36:05.164: INFO: Deleting pod "var-expansion-abd2aa3d-75de-41ab-8442-f759bf608aae" in namespace "var-expansion-2897"
... skipping 6 lines ...
• [SLOW TEST:154.685 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":340,"skipped":6385,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSliceMirroring 
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 12 lines ...
STEP: mirroring deletion of a custom Endpoint
Sep 16 13:36:39.321: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 13:36:41.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-8167" for this suite.
•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":341,"skipped":6443,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 18 lines ...
• [SLOW TEST:6.701 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":342,"skipped":6463,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 52 lines ...
• [SLOW TEST:13.015 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":343,"skipped":6502,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}
SSSSSep 16 13:37:01.054: INFO: Running AfterSuite actions on all nodes
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Sep 16 13:37:01.054: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Sep 16 13:37:01.054: INFO: Running AfterSuite actions on node 1
Sep 16 13:37:01.054: INFO: Dumping logs locally to: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d
Sep 16 13:37:01.055: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/junit_01.xml
{"msg":"Test Suite completed","total":346,"completed":343,"skipped":6506,"failed":3,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]"]}


Summarizing 3 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323

[Fail] [sig-network] Networking Granular Checks: Pods [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/network/utils.go:858

Ran 346 of 6852 Specs in 7199.730 seconds
FAIL! -- 343 Passed | 3 Failed | 0 Pending | 6506 Skipped
--- FAIL: TestE2E (7201.76s)
FAIL

Ginkgo ran 1 suite in 2h0m1.875261659s
Test Suite Failed
F0916 13:37:01.144924   96824 ginkgo.go:205] failed to run ginkgo tester: exit status 1
I0916 13:37:01.150118    2874 down.go:29] GCE deployer starting Down()
I0916 13:37:01.150159    2874 common.go:204] checking locally built kubectl ...
I0916 13:37:01.150524    2874 down.go:43] About to run script at: /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh
I0916 13:37:01.150543    2874 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh 
Bringing down cluster using provider: gce
... calling verify-prereqs
... skipping 38 lines ...
Property "users.k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de-basic-auth" unset.
Property "contexts.k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de" unset.
Cleared config for k8s-infra-e2e-boskos-038_kt2-5be7f4b0-16de from /logs/artifacts/5be7f4b0-16de-11ec-a0c8-aafcb65c973d/kubetest2-kubeconfig
Done
I0916 13:42:57.759516    2874 down.go:53] about to delete nodeport firewall rule
I0916 13:42:57.759623    2874 local.go:42] ⚙️ gcloud compute firewall-rules delete --project k8s-infra-e2e-boskos-038 kt2-5be7f4b0-16de-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-038/global/firewalls/kt2-5be7f4b0-16de-minion-nodeports' was not found

W0916 13:42:58.791307    2874 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
I0916 13:42:58.791348    2874 down.go:59] releasing boskos project
I0916 13:42:58.810717    2874 boskos.go:83] Boskos heartbeat func received signal to close
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
70e4bf1e2ac9
... skipping 4 lines ...