Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-17 02:11
Elapsed: 2h35m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 349 lines ...
Trying to find master named 'kt2-4d7c9b85-175c-master'
Looking for address 'kt2-4d7c9b85-175c-master-ip'
Using master: kt2-4d7c9b85-175c-master (external IP: 34.69.105.80; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

.......Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c" set.
User "k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c" set.
Context "k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c" created.
Switched to context "k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c".
... skipping 25 lines ...
kt2-4d7c9b85-175c-minion-group-94gp   Ready                      <none>   16s   v1.23.0-alpha.2.69+2f10e6587c07ef
kt2-4d7c9b85-175c-minion-group-b90v   Ready                      <none>   18s   v1.23.0-alpha.2.69+2f10e6587c07ef
kt2-4d7c9b85-175c-minion-group-n0sz   Ready                      <none>   18s   v1.23.0-alpha.2.69+2f10e6587c07ef
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 40 lines ...
Specify --start=53076 in the next get-serial-port-output invocation to get only the new output starting from here.
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/cluster-logs'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-4d7c9b85-175c-minion-group-b90v
... skipping 7 lines ...
Specify --start=103543 in the next get-serial-port-output invocation to get only the new output starting from here.
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-4d7c9b85-175c-minion-group
NODE_NAMES=kt2-4d7c9b85-175c-minion-group-94gp kt2-4d7c9b85-175c-minion-group-b90v kt2-4d7c9b85-175c-minion-group-n0sz
Failures for kt2-4d7c9b85-175c-minion-group (if any):
I0917 02:40:28.828081    2890 dumplogs.go:121] About to run: [/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl cluster-info dump]
I0917 02:40:28.828191    2890 local.go:42] ⚙️ /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl cluster-info dump
I0917 02:40:29.911080    2890 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-ginkgo ; --focus-regex=\[Conformance\] ; --use-built-binaries
I0917 02:40:30.016330   97107 ginkgo.go:120] Using kubeconfig at /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
I0917 02:40:30.016516   97107 ginkgo.go:90] Running ginkgo test as /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/ginkgo [--nodes=1 /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/e2e.test -- --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --kubectl-path=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --ginkgo.flakeAttempts=1 --ginkgo.skip= --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1]
Sep 17 02:40:30.109: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0917 02:40:30.110168   97125 e2e.go:127] Starting e2e run "6139c2b9-4f97-4733-badf-c136f982b8c8" on Ginkgo node 1
{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1631846430 - Will randomize all specs
Will run 346 of 6851 specs

Sep 17 02:40:32.107: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 02:40:32.110: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 17 02:40:32.128: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 17 02:40:32.163: INFO: The status of Pod l7-default-backend-79858d8f86-n2w4m is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:32.163: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-lxnhz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:32.163: INFO: 27 / 30 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 17 02:40:32.163: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 17 02:40:32.163: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 17 02:40:32.163: INFO: l7-default-backend-79858d8f86-n2w4m     kt2-4d7c9b85-175c-minion-group-n0sz  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:53 +0000 UTC  }]
Sep 17 02:40:32.163: INFO: metrics-server-v0.5.0-6554f5dbd8-lxnhz  kt2-4d7c9b85-175c-minion-group-94gp  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC  }]
Sep 17 02:40:32.163: INFO: 
Sep 17 02:40:34.191: INFO: The status of Pod kube-addon-manager-kt2-4d7c9b85-175c-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:34.191: INFO: The status of Pod l7-default-backend-79858d8f86-n2w4m is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:34.191: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-lxnhz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:34.191: INFO: 27 / 31 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Sep 17 02:40:34.191: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 17 02:40:34.191: INFO: POD                                          NODE                                 PHASE    GRACE  CONDITIONS
Sep 17 02:40:34.191: INFO: kube-addon-manager-kt2-4d7c9b85-175c-master  kt2-4d7c9b85-175c-master             Pending         []
Sep 17 02:40:34.191: INFO: l7-default-backend-79858d8f86-n2w4m          kt2-4d7c9b85-175c-minion-group-n0sz  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:53 +0000 UTC  }]
Sep 17 02:40:34.191: INFO: metrics-server-v0.5.0-6554f5dbd8-lxnhz       kt2-4d7c9b85-175c-minion-group-94gp  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:39:17 +0000 UTC  }]
Sep 17 02:40:34.192: INFO: 
Sep 17 02:40:36.192: INFO: The status of Pod konnectivity-server-kt2-4d7c9b85-175c-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:36.192: INFO: The status of Pod kube-addon-manager-kt2-4d7c9b85-175c-master is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:36.192: INFO: The status of Pod l7-default-backend-79858d8f86-n2w4m is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:36.193: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-lxnhz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 17 02:40:36.193: INFO: 27 / 32 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Sep 17 02:40:36.193: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 17 02:40:36.193: INFO: POD                                           NODE                                 PHASE    GRACE  CONDITIONS
Sep 17 02:40:36.193: INFO: konnectivity-server-kt2-4d7c9b85-175c-master  kt2-4d7c9b85-175c-master             Pending         []
Sep 17 02:40:36.193: INFO: kube-addon-manager-kt2-4d7c9b85-175c-master   kt2-4d7c9b85-175c-master             Pending         []
Sep 17 02:40:36.193: INFO: l7-default-backend-79858d8f86-n2w4m           kt2-4d7c9b85-175c-minion-group-n0sz  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:54 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-17 02:38:53 +0000 UTC  }]
... skipping 608 lines ...
Sep 17 02:45:38.210: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (300 seconds elapsed)
Sep 17 02:45:38.210: INFO: there are not ready daemonsets: [konnectivity-agent]
Sep 17 02:45:38.214: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'fluentd-gcp-v3.2.0' (300 seconds elapsed)
Sep 17 02:45:38.214: INFO: 3 / 4 pods ready in namespace 'kube-system' in daemonset 'konnectivity-agent' (300 seconds elapsed)
Sep 17 02:45:38.214: INFO: 4 / 4 pods ready in namespace 'kube-system' in daemonset 'metadata-proxy-v0.1' (300 seconds elapsed)
Sep 17 02:45:38.214: INFO: there are not ready daemonsets: [konnectivity-agent]
Sep 17 02:45:38.214: INFO: WARNING: Waiting for all daemonsets to be ready failed: timed out waiting for the condition
Sep 17 02:45:38.214: INFO: e2e test version: v1.23.0-alpha.2.69+2f10e6587c07ef
Sep 17 02:45:38.215: INFO: kube-apiserver version: v1.23.0-alpha.2.69+2f10e6587c07ef
Sep 17 02:45:38.215: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 02:45:38.220: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
... skipping 33 lines ...
• [SLOW TEST:22.063 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":1,"skipped":32,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 50 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":45,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 34 lines ...
• [SLOW TEST:9.130 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":3,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:34.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7494" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":4,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 5 lines ...
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-1907
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1907 to expose endpoints map[]
Sep 17 02:46:34.341: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Sep 17 02:46:35.368: INFO: successfully validated that service multi-endpoint-test in namespace services-1907 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-1907
Sep 17 02:46:35.401: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 02:46:37.406: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1907 to expose endpoints map[pod1:[100]]
Sep 17 02:46:37.415: INFO: successfully validated that service multi-endpoint-test in namespace services-1907 exposes endpoints map[pod1:[100]]
... skipping 3 lines ...
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1907 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 17 02:46:39.485: INFO: successfully validated that service multi-endpoint-test in namespace services-1907 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Sep 17 02:46:39.485: INFO: Creating new exec pod
Sep 17 02:46:42.502: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-1907 exec execpod2fb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 17 02:46:43.671: INFO: rc: 1
Sep 17 02:46:43.671: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-1907 exec execpod2fb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: connect to multi-endpoint-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 02:46:44.672: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-1907 exec execpod2fb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 17 02:46:44.847: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Sep 17 02:46:44.847: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 02:46:44.847: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-1907 exec execpod2fb7w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.26.134 80'
... skipping 21 lines ...
• [SLOW TEST:12.500 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":346,"completed":5,"skipped":129,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 02:46:46.827: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f" in namespace "downward-api-8109" to be "Succeeded or Failed"
Sep 17 02:46:46.834: INFO: Pod "downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.373243ms
Sep 17 02:46:48.837: INFO: Pod "downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009868842s
STEP: Saw pod success
Sep 17 02:46:48.837: INFO: Pod "downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f" satisfied condition "Succeeded or Failed"
Sep 17 02:46:48.839: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f container client-container: <nil>
STEP: delete the pod
Sep 17 02:46:48.870: INFO: Waiting for pod downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f to disappear
Sep 17 02:46:48.874: INFO: Pod downwardapi-volume-8c0ccef2-57b7-4d6c-adc7-2134c3be997f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:48.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8109" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":6,"skipped":167,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 02:46:48.880: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 02:46:48.920: INFO: Waiting up to 5m0s for pod "downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f" in namespace "downward-api-1940" to be "Succeeded or Failed"
Sep 17 02:46:48.930: INFO: Pod "downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.843139ms
Sep 17 02:46:50.938: INFO: Pod "downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017256899s
STEP: Saw pod success
Sep 17 02:46:50.938: INFO: Pod "downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f" satisfied condition "Succeeded or Failed"
Sep 17 02:46:50.941: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f container dapi-container: <nil>
STEP: delete the pod
Sep 17 02:46:50.966: INFO: Waiting for pod downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f to disappear
Sep 17 02:46:50.970: INFO: Pod downward-api-bd1b8661-71ad-4140-bcc3-e418b89a055f no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:50.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1940" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":7,"skipped":179,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 11 lines ...
STEP: Updating configmap configmap-test-upd-6d96fc86-22b4-42d1-9c9a-09c1e57a29fc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:55.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2029" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":8,"skipped":203,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 38 lines ...
Sep 17 02:46:57.281: INFO: Starting http.Client for https://34.69.105.80/api/v1/namespaces/proxy-5999/services/test-service/proxy/some/path/with/PUT
Sep 17 02:46:57.286: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:57.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5999" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":9,"skipped":221,"failed":0}
SSSS
------------------------------
[sig-node] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 17 02:46:57.336: INFO: The status of Pod busybox-readonly-fs9ad628c1-9efd-4730-8abd-8a144bb55b15 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 02:46:59.339: INFO: The status of Pod busybox-readonly-fs9ad628c1-9efd-4730-8abd-8a144bb55b15 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:59.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6326" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":10,"skipped":225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:46:59.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2517" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":346,"completed":11,"skipped":254,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Sep 17 02:47:01.780: INFO: stderr: ""
Sep 17 02:47:01.780: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:47:01.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5117" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":12,"skipped":259,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1885/configmap-test-1637c387-23ac-45c4-890e-3205cdf2ee2b
STEP: Creating a pod to test consume configMaps
Sep 17 02:47:01.836: INFO: Waiting up to 5m0s for pod "pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005" in namespace "configmap-1885" to be "Succeeded or Failed"
Sep 17 02:47:01.841: INFO: Pod "pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005": Phase="Pending", Reason="", readiness=false. Elapsed: 4.448212ms
Sep 17 02:47:03.847: INFO: Pod "pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010446606s
STEP: Saw pod success
Sep 17 02:47:03.847: INFO: Pod "pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005" satisfied condition "Succeeded or Failed"
Sep 17 02:47:03.852: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005 container env-test: <nil>
STEP: delete the pod
Sep 17 02:47:03.869: INFO: Waiting for pod pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005 to disappear
Sep 17 02:47:03.873: INFO: Pod pod-configmaps-ea2b6984-056d-4fc2-8903-c62052c92005 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:47:03.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1885" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":263,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 36 lines ...
• [SLOW TEST:14.377 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":14,"skipped":266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:47:18.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5419" for this suite.
STEP: Destroying namespace "nspatchtest-9bfb86e5-92ca-4283-9ab6-aefd346a922f-3193" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":15,"skipped":294,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 21 lines ...
• [SLOW TEST:11.312 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":16,"skipped":337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-60a739b8-d70c-4c35-9deb-7b27dc7f07fd
STEP: Creating a pod to test consume configMaps
Sep 17 02:47:29.754: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833" in namespace "configmap-919" to be "Succeeded or Failed"
Sep 17 02:47:29.757: INFO: Pod "pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833": Phase="Pending", Reason="", readiness=false. Elapsed: 3.142789ms
Sep 17 02:47:31.762: INFO: Pod "pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007497093s
STEP: Saw pod success
Sep 17 02:47:31.762: INFO: Pod "pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833" satisfied condition "Succeeded or Failed"
Sep 17 02:47:31.764: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 02:47:31.792: INFO: Waiting for pod pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833 to disappear
Sep 17 02:47:31.795: INFO: Pod pod-configmaps-0ea2c826-0cd4-467a-9c1b-743e43cae833 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:47:31.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-919" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":17,"skipped":370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 29 lines ...
• [SLOW TEST:10.078 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":18,"skipped":414,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 02:47:41.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5" in namespace "downward-api-5878" to be "Succeeded or Failed"
Sep 17 02:47:41.941: INFO: Pod "downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 16.595152ms
Sep 17 02:47:43.945: INFO: Pod "downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020518423s
Sep 17 02:47:45.950: INFO: Pod "downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025743013s
STEP: Saw pod success
Sep 17 02:47:45.950: INFO: Pod "downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5" satisfied condition "Succeeded or Failed"
Sep 17 02:47:45.954: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5 container client-container: <nil>
STEP: delete the pod
Sep 17 02:47:45.977: INFO: Waiting for pod downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5 to disappear
Sep 17 02:47:45.986: INFO: Pod downwardapi-volume-99a4591f-552f-4f05-a2e7-1b6b7a005cd5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:47:45.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5878" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":472,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 53 lines ...
• [SLOW TEST:5.606 seconds]
[sig-node] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":20,"skipped":512,"failed":0}
SSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 19 lines ...
• [SLOW TEST:242.677 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":21,"skipped":518,"failed":0}
SSSSSSS
------------------------------
[sig-apps] CronJob 
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
• [SLOW TEST:66.112 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":22,"skipped":525,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-582ece18-9226-478f-9b89-8175f2e761f1
STEP: Creating a pod to test consume configMaps
Sep 17 02:53:00.483: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1" in namespace "projected-5478" to be "Succeeded or Failed"
Sep 17 02:53:00.492: INFO: Pod "pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.25239ms
Sep 17 02:53:02.496: INFO: Pod "pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013797841s
STEP: Saw pod success
Sep 17 02:53:02.497: INFO: Pod "pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1" satisfied condition "Succeeded or Failed"
Sep 17 02:53:02.499: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 02:53:02.535: INFO: Waiting for pod pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1 to disappear
Sep 17 02:53:02.538: INFO: Pod pod-projected-configmaps-c02fb4dd-6e4c-4492-a060-fe2f8f935df1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:02.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5478" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":23,"skipped":531,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-2b26490d-f20d-4a2b-9873-74193661fc1e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:06.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8698" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":24,"skipped":547,"failed":0}

------------------------------
[sig-node] Security Context 
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 2 lines ...
Sep 17 02:53:06.791: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 02:53:06.827: INFO: Waiting up to 5m0s for pod "security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f" in namespace "security-context-9535" to be "Succeeded or Failed"
Sep 17 02:53:06.836: INFO: Pod "security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.610557ms
Sep 17 02:53:08.839: INFO: Pod "security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012147691s
STEP: Saw pod success
Sep 17 02:53:08.839: INFO: Pod "security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f" satisfied condition "Succeeded or Failed"
Sep 17 02:53:08.841: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f container test-container: <nil>
STEP: delete the pod
Sep 17 02:53:08.872: INFO: Waiting for pod security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f to disappear
Sep 17 02:53:08.875: INFO: Pod security-context-b759bc8c-5c16-4707-a2b0-3c646c396a7f no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:08.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9535" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":25,"skipped":547,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 17 02:53:10.958: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:10.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5684" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":26,"skipped":613,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 29 lines ...
• [SLOW TEST:15.542 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":27,"skipped":625,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 15 lines ...
• [SLOW TEST:15.514 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":28,"skipped":643,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 02:53:42.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446" in namespace "downward-api-5849" to be "Succeeded or Failed"
Sep 17 02:53:42.104: INFO: Pod "downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102233ms
Sep 17 02:53:44.109: INFO: Pod "downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008160623s
STEP: Saw pod success
Sep 17 02:53:44.109: INFO: Pod "downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446" satisfied condition "Succeeded or Failed"
Sep 17 02:53:44.111: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446 container client-container: <nil>
STEP: delete the pod
Sep 17 02:53:44.128: INFO: Waiting for pod downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446 to disappear
Sep 17 02:53:44.131: INFO: Pod downwardapi-volume-910352d8-8893-44d9-a4db-1a83ac5cb446 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:44.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5849" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":29,"skipped":652,"failed":0}
SSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 17 02:53:46.194: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-718" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":30,"skipped":659,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] LimitRange
... skipping 38 lines ...
• [SLOW TEST:7.245 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":31,"skipped":678,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-73e7d2b2-c17a-46a9-97f9-b4443d76355f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:53:57.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8292" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":32,"skipped":684,"failed":0}

------------------------------
[sig-apps] ReplicaSet 
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 20 lines ...
• [SLOW TEST:5.128 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":33,"skipped":684,"failed":0}
SSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 27 lines ...
• [SLOW TEST:135.395 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":34,"skipped":688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 65 lines ...
• [SLOW TEST:12.521 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":35,"skipped":713,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-4ffd9a9a-4a82-4e6d-a076-9c772cbcc53c
STEP: Creating a pod to test consume secrets
Sep 17 02:56:30.812: INFO: Waiting up to 5m0s for pod "pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8" in namespace "secrets-8311" to be "Succeeded or Failed"
Sep 17 02:56:30.820: INFO: Pod "pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.428171ms
Sep 17 02:56:32.823: INFO: Pod "pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01105581s
STEP: Saw pod success
Sep 17 02:56:32.823: INFO: Pod "pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8" satisfied condition "Succeeded or Failed"
Sep 17 02:56:32.826: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 02:56:32.856: INFO: Waiting for pod pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8 to disappear
Sep 17 02:56:32.859: INFO: Pod pod-secrets-61edcf02-f6d2-4987-9dfa-49333e262ab8 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:56:32.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8311" for this suite.
STEP: Destroying namespace "secret-namespace-9504" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":724,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:56:34.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7996" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":37,"skipped":728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:56:34.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4769" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":38,"skipped":755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Sep 17 02:56:36.463: INFO: Pod "test-recreate-deployment-785fd889-wk6ch" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-785fd889-wk6ch test-recreate-deployment-785fd889- deployment-6891  35fbd22c-f3d3-4e64-8363-1dd6670bed11 4251 0 2021-09-17 02:56:36 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:785fd889] map[] [{apps/v1 ReplicaSet test-recreate-deployment-785fd889 bb532727-16e7-415c-8e8b-a952a0c44c67 0xc00190bdef 0xc00190be00}] []  [{kube-controller-manager Update v1 2021-09-17 02:56:36 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bb532727-16e7-415c-8e8b-a952a0c44c67\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 02:56:36 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-trcxj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-trcxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-4d7c9b85-175c-minion-group-94gp,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 02:56:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 02:56:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-17 02:56:36 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 02:56:36 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:,StartTime:2021-09-17 02:56:36 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 02:56:36.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6891" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":39,"skipped":797,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 86 lines ...
Sep 17 02:57:38.412: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Sep 17 02:57:38.413: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Sep 17 02:57:38.413: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Sep 17 02:57:38.413: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=statefulset-4890 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 17 02:57:38.762: INFO: rc: 1
Sep 17 02:57:38.762: INFO: Waiting 10s to retry failed RunHostCmd: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=statefulset-4890 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "2b92ab25e5ac7c72d6ab176c757fa727053b32307da2dfe5aaccda81debd8555": cannot exec in a stopped state: unknown

error:
exit status 1
Sep 17 02:57:48.763: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=statefulset-4890 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 17 02:57:48.824: INFO: rc: 1
Sep 17 02:57:48.824: INFO: Waiting 10s to retry failed RunHostCmd: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=statefulset-4890 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... skipping 260 lines (same RunHostCmd retry repeated every 10s, each failing with: Error from server (NotFound): pods "ss-2" not found) ...
I0917 03:02:11.501782    2890 boskos.go:86] Sending heartbeat to Boskos
... skipping 20 lines ...
Sep 17 03:02:40.912: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=statefulset-4890 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Sep 17 03:02:40.988: INFO: rc: 1
Sep 17 03:02:40.988: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: 
Sep 17 03:02:40.988: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
... skipping 13 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":40,"skipped":838,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:02:41.125: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41" in namespace "projected-5253" to be "Succeeded or Failed"
Sep 17 03:02:41.130: INFO: Pod "downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41": Phase="Pending", Reason="", readiness=false. Elapsed: 5.382722ms
Sep 17 03:02:43.133: INFO: Pod "downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008525078s
STEP: Saw pod success
Sep 17 03:02:43.133: INFO: Pod "downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41" satisfied condition "Succeeded or Failed"
Sep 17 03:02:43.135: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41 container client-container: <nil>
STEP: delete the pod
Sep 17 03:02:43.166: INFO: Waiting for pod downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41 to disappear
Sep 17 03:02:43.170: INFO: Pod downwardapi-volume-abf2584b-4fc5-4436-a78d-1f13636fbe41 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:02:43.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5253" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":41,"skipped":841,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:02:43.213: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7" in namespace "projected-1098" to be "Succeeded or Failed"
Sep 17 03:02:43.217: INFO: Pod "downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382462ms
Sep 17 03:02:45.221: INFO: Pod "downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007585487s
STEP: Saw pod success
Sep 17 03:02:45.221: INFO: Pod "downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7" satisfied condition "Succeeded or Failed"
Sep 17 03:02:45.223: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7 container client-container: <nil>
STEP: delete the pod
Sep 17 03:02:45.238: INFO: Waiting for pod downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7 to disappear
Sep 17 03:02:45.242: INFO: Pod downwardapi-volume-2081a9fe-76e0-46fe-8a1a-dc3ef2abb3e7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:02:45.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1098" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":841,"failed":0}
SSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:02:45.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-333" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":43,"skipped":845,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] EndpointSliceMirroring 
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 12 lines ...
STEP: mirroring deletion of a custom Endpoint
Sep 17 03:02:47.406: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:02:49.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-2433" for this suite.
•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":44,"skipped":855,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 03:02:49.419: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 03:02:49.459: INFO: Waiting up to 5m0s for pod "downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f" in namespace "downward-api-4349" to be "Succeeded or Failed"
Sep 17 03:02:49.464: INFO: Pod "downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.062368ms
Sep 17 03:02:51.468: INFO: Pod "downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009349828s
STEP: Saw pod success
Sep 17 03:02:51.469: INFO: Pod "downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f" satisfied condition "Succeeded or Failed"
Sep 17 03:02:51.471: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:02:51.490: INFO: Waiting for pod downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f to disappear
Sep 17 03:02:51.494: INFO: Pod downward-api-9f4ed39b-f786-4ad8-a08f-99270b15401f no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:02:51.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4349" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":45,"skipped":875,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 51 lines ...
• [SLOW TEST:9.298 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":46,"skipped":904,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 7 lines ...
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:02.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4800" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":47,"skipped":921,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 17 03:03:07.014: INFO: The status of Pod pod-hostip-042c3092-c475-406e-b5d6-a1155f67d841 is Running (Ready = true)
Sep 17 03:03:07.020: INFO: Pod pod-hostip-042c3092-c475-406e-b5d6-a1155f67d841 has hostIP: 10.128.0.5
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:07.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3609" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":940,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":49,"skipped":950,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PreStop
... skipping 32 lines ...
• [SLOW TEST:9.103 seconds]
[sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":346,"completed":50,"skipped":963,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Events
... skipping 23 lines ...
• [SLOW TEST:6.113 seconds]
[sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":346,"completed":51,"skipped":973,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 17 03:03:30.428: INFO: The status of Pod pod-logs-websocket-06296968-da62-46a7-b32a-5a80c7006611 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 03:03:32.433: INFO: The status of Pod pod-logs-websocket-06296968-da62-46a7-b32a-5a80c7006611 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1387" for this suite.
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":52,"skipped":1007,"failed":0}
SSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Sep 17 03:03:32.501: INFO: cleanMinorVersion: 23
Sep 17 03:03:32.501: INFO: Minor version: 23+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:32.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9568" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":53,"skipped":1011,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should support CronJob API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 23 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:32.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2114" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":54,"skipped":1040,"failed":0}
SSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 40 lines ...
• [SLOW TEST:13.345 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":55,"skipped":1043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:03:46.077: INFO: Waiting up to 5m0s for pod "downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a" in namespace "projected-202" to be "Succeeded or Failed"
Sep 17 03:03:46.082: INFO: Pod "downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.762256ms
Sep 17 03:03:48.086: INFO: Pod "downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008197271s
STEP: Saw pod success
Sep 17 03:03:48.086: INFO: Pod "downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a" satisfied condition "Succeeded or Failed"
Sep 17 03:03:48.088: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a container client-container: <nil>
STEP: delete the pod
Sep 17 03:03:48.105: INFO: Waiting for pod downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a to disappear
Sep 17 03:03:48.108: INFO: Pod downwardapi-volume-133ca59a-176b-42c5-8b55-1cd532e5697a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:03:48.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-202" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":56,"skipped":1067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:22.123 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":57,"skipped":1102,"failed":0}
SSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:04:10.238: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 17 03:04:10.282: INFO: PodSpec: initContainers in spec.initContainers
Sep 17 03:04:52.777: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-18494382-fff0-4b98-920a-94c2a54c5d49", GenerateName:"", Namespace:"init-container-1542", SelfLink:"", UID:"ac327ab3-9450-451e-8863-190c2e9e07bc", ResourceVersion:"5847", Generation:0, CreationTimestamp:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"282879447"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00450c048), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 17, 3, 4, 11, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00450c078), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-b4wbl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), 
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00451a020), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-b4wbl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-b4wbl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", 
MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-b4wbl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004c90328), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), 
NodeName:"kt2-4d7c9b85-175c-minion-group-94gp", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00088e000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004c903a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004c903c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004c903c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004c903cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003268050), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, 
v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.128.0.5", PodIP:"10.64.3.41", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.3.41"}}, StartTime:time.Date(2021, time.September, 17, 3, 4, 10, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00088e230)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00088e2a0)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://604eabb8adfc140d8860e1d6a2b97d005374c15b20391f2ba392aa93e7ade81c", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00451a0c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00451a080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc004c9044f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:04:52.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1542" for this suite.

• [SLOW TEST:42.547 seconds]
[sig-node] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":58,"skipped":1105,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-9979412a-bfff-42be-a323-703e5adce9d7
STEP: Creating a pod to test consume configMaps
Sep 17 03:04:52.831: INFO: Waiting up to 5m0s for pod "pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09" in namespace "configmap-9591" to be "Succeeded or Failed"
Sep 17 03:04:52.836: INFO: Pod "pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.970918ms
Sep 17 03:04:54.841: INFO: Pod "pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009463804s
STEP: Saw pod success
Sep 17 03:04:54.841: INFO: Pod "pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09" satisfied condition "Succeeded or Failed"
Sep 17 03:04:54.843: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:04:54.872: INFO: Waiting for pod pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09 to disappear
Sep 17 03:04:54.876: INFO: Pod pod-configmaps-9dc5ce41-17c7-4e53-86cd-a21a266c7b09 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:04:54.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9591" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":1107,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:15.058 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":60,"skipped":1119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:154.404 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":61,"skipped":1146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-8128e100-2cdb-4687-8535-8ac2370a2934
STEP: Creating a pod to test consume secrets
Sep 17 03:07:44.385: INFO: Waiting up to 5m0s for pod "pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a" in namespace "secrets-501" to be "Succeeded or Failed"
Sep 17 03:07:44.391: INFO: Pod "pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.26025ms
Sep 17 03:07:46.395: INFO: Pod "pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009067075s
STEP: Saw pod success
Sep 17 03:07:46.395: INFO: Pod "pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a" satisfied condition "Succeeded or Failed"
Sep 17 03:07:46.398: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:07:46.431: INFO: Waiting for pod pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a to disappear
Sep 17 03:07:46.435: INFO: Pod pod-secrets-90fb8d2a-a01a-4fb2-ba33-c6f1e0186d4a no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:07:46.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-501" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":62,"skipped":1176,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 15 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":346,"completed":63,"skipped":1186,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:07:58.120: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-6cc94305-4cd6-45d0-9cb6-35af5c8435c1
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:07:58.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3126" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":64,"skipped":1200,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:7.313 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":65,"skipped":1207,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-b5w4
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 03:08:05.767: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-b5w4" in namespace "subpath-1920" to be "Succeeded or Failed"
Sep 17 03:08:05.772: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664775ms
Sep 17 03:08:07.776: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 2.008525785s
Sep 17 03:08:09.779: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011993019s
Sep 17 03:08:11.783: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 6.015918904s
Sep 17 03:08:13.787: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 8.019972811s
Sep 17 03:08:15.792: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 10.024811224s
Sep 17 03:08:17.797: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 12.029109085s
Sep 17 03:08:19.801: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 14.033214392s
Sep 17 03:08:21.806: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 16.038357718s
Sep 17 03:08:23.811: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 18.043134523s
Sep 17 03:08:25.815: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Running", Reason="", readiness=true. Elapsed: 20.047297872s
Sep 17 03:08:27.819: INFO: Pod "pod-subpath-test-downwardapi-b5w4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051143952s
STEP: Saw pod success
Sep 17 03:08:27.819: INFO: Pod "pod-subpath-test-downwardapi-b5w4" satisfied condition "Succeeded or Failed"
Sep 17 03:08:27.821: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-subpath-test-downwardapi-b5w4 container test-container-subpath-downwardapi-b5w4: <nil>
STEP: delete the pod
Sep 17 03:08:27.844: INFO: Waiting for pod pod-subpath-test-downwardapi-b5w4 to disappear
Sep 17 03:08:27.850: INFO: Pod pod-subpath-test-downwardapi-b5w4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-b5w4
Sep 17 03:08:27.850: INFO: Deleting pod "pod-subpath-test-downwardapi-b5w4" in namespace "subpath-1920"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":66,"skipped":1217,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 23 lines ...
• [SLOW TEST:12.106 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":346,"completed":67,"skipped":1228,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:08:40.021: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a" in namespace "downward-api-9950" to be "Succeeded or Failed"
Sep 17 03:08:40.029: INFO: Pod "downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.591811ms
Sep 17 03:08:42.032: INFO: Pod "downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011098956s
STEP: Saw pod success
Sep 17 03:08:42.032: INFO: Pod "downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a" satisfied condition "Succeeded or Failed"
Sep 17 03:08:42.035: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a container client-container: <nil>
STEP: delete the pod
Sep 17 03:08:42.051: INFO: Waiting for pod downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a to disappear
Sep 17 03:08:42.054: INFO: Pod downwardapi-volume-50667a64-fc58-4275-82cc-19ad35c78d3a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:08:42.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9950" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1289,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should list, patch and delete a collection of StatefulSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 31 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":69,"skipped":1310,"failed":0}
[sig-node] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...
STEP: verifying the updated pod is in kubernetes
Sep 17 03:09:04.831: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:09:04.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3560" for this suite.
•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 21 lines ...
• [SLOW TEST:10.077 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":71,"skipped":1334,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 19 lines ...
• [SLOW TEST:10.116 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":72,"skipped":1373,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:09:25.031: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 17 03:09:25.074: INFO: Waiting up to 5m0s for pod "pod-27533f78-ed8a-4304-b99d-6c5c986fe25d" in namespace "emptydir-9435" to be "Succeeded or Failed"
Sep 17 03:09:25.080: INFO: Pod "pod-27533f78-ed8a-4304-b99d-6c5c986fe25d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.832646ms
Sep 17 03:09:27.084: INFO: Pod "pod-27533f78-ed8a-4304-b99d-6c5c986fe25d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010411088s
STEP: Saw pod success
Sep 17 03:09:27.084: INFO: Pod "pod-27533f78-ed8a-4304-b99d-6c5c986fe25d" satisfied condition "Succeeded or Failed"
Sep 17 03:09:27.086: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-27533f78-ed8a-4304-b99d-6c5c986fe25d container test-container: <nil>
STEP: delete the pod
Sep 17 03:09:27.116: INFO: Waiting for pod pod-27533f78-ed8a-4304-b99d-6c5c986fe25d to disappear
Sep 17 03:09:27.122: INFO: Pod pod-27533f78-ed8a-4304-b99d-6c5c986fe25d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:09:27.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9435" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":73,"skipped":1404,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":346,"completed":74,"skipped":1415,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:09:36.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5633" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":75,"skipped":1454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-x899
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 03:09:36.300: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x899" in namespace "subpath-4749" to be "Succeeded or Failed"
Sep 17 03:09:36.305: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Pending", Reason="", readiness=false. Elapsed: 4.3057ms
Sep 17 03:09:38.307: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 2.006852631s
Sep 17 03:09:40.311: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 4.010306871s
Sep 17 03:09:42.315: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 6.01408699s
Sep 17 03:09:44.319: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 8.018667228s
Sep 17 03:09:46.327: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 10.02672332s
Sep 17 03:09:48.331: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 12.030939065s
Sep 17 03:09:50.336: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 14.035086939s
Sep 17 03:09:52.339: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 16.038692041s
Sep 17 03:09:54.344: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 18.043138808s
Sep 17 03:09:56.348: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Running", Reason="", readiness=true. Elapsed: 20.047565801s
Sep 17 03:09:58.353: INFO: Pod "pod-subpath-test-configmap-x899": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.052087191s
STEP: Saw pod success
Sep 17 03:09:58.353: INFO: Pod "pod-subpath-test-configmap-x899" satisfied condition "Succeeded or Failed"
Sep 17 03:09:58.355: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-subpath-test-configmap-x899 container test-container-subpath-configmap-x899: <nil>
STEP: delete the pod
Sep 17 03:09:58.372: INFO: Waiting for pod pod-subpath-test-configmap-x899 to disappear
Sep 17 03:09:58.375: INFO: Pod pod-subpath-test-configmap-x899 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x899
Sep 17 03:09:58.375: INFO: Deleting pod "pod-subpath-test-configmap-x899" in namespace "subpath-4749"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":76,"skipped":1507,"failed":0}
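Annotation: the repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` / `Elapsed:` lines in the test above come from a loop that re-polls the pod phase until a terminal state or a timeout. A minimal, hypothetical sketch of that pattern (the `get_phase` callback and the polling interval are assumptions for illustration, not the e2e framework's actual code):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until it returns a terminal pod phase or the timeout expires.

    Returns "Succeeded" or "Failed"; raises TimeoutError otherwise.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()  # e.g. "Pending", "Running", "Succeeded", "Failed"
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod did not reach a terminal phase in time")

# Simulated phase sequence mirroring the log: Pending, then Running, then Succeeded.
phases = iter(["Pending", "Running", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases), timeout=5.0, interval=0.0)
print(result)  # Succeeded
```

In the real framework the per-iteration `Elapsed:` values are logged on each poll, which is why the log shows one `Phase=` line roughly every two seconds.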
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 18 lines ...
• [SLOW TEST:6.648 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":77,"skipped":1516,"failed":0}
SSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1519,"failed":0}
SSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 27 lines ...
• [SLOW TEST:7.124 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":79,"skipped":1523,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-7f84a2fa-af32-4ded-ac8f-7b4afc36baa4
STEP: Creating a pod to test consume secrets
Sep 17 03:10:20.332: INFO: Waiting up to 5m0s for pod "pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405" in namespace "secrets-5264" to be "Succeeded or Failed"
Sep 17 03:10:20.365: INFO: Pod "pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405": Phase="Pending", Reason="", readiness=false. Elapsed: 32.778011ms
Sep 17 03:10:22.369: INFO: Pod "pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.03708847s
STEP: Saw pod success
Sep 17 03:10:22.369: INFO: Pod "pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405" satisfied condition "Succeeded or Failed"
Sep 17 03:10:22.372: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:10:22.387: INFO: Waiting for pod pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405 to disappear
Sep 17 03:10:22.390: INFO: Pod pod-secrets-d8c721c0-0f57-4c5a-a629-54032b7f5405 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:10:22.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5264" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1541,"failed":0}
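Annotation: the `Elapsed:` values above (e.g. `32.778011ms`, `2.03708847s`, and timeouts like `5m0s`) are Go `time.Duration` strings. A small helper, covering only the `ms`/`s`/`m` units that actually appear in this log, to convert them to seconds:

```python
import re

# Unit factors for the Go duration suffixes seen in this log.
_UNITS = {"ms": 1e-3, "s": 1.0, "m": 60.0}

def go_duration_to_seconds(text):
    """Convert a Go duration string like '2.03708847s' or '5m0s' to float seconds."""
    total = 0.0
    # "ms" must be tried before "m" and "s" so '32.778011ms' parses as milliseconds.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m)", text):
        total += float(value) * _UNITS[unit]
    return total

print(go_duration_to_seconds("5m0s"))        # 300.0
print(go_duration_to_seconds("32.778011ms"))
```

Note this sketch ignores the `h`, `us`, and `ns` units Go also emits, since none occur in the lines above.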
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 23 lines ...
• [SLOW TEST:13.114 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":81,"skipped":1547,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":82,"skipped":1558,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 14 lines ...
STEP: Creating configMap with name cm-test-opt-create-16615190-713a-4acc-b052-3aba3b0d4446
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:11:07.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3788" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1618,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 43 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1238
    should create services for rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":346,"completed":84,"skipped":1623,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:11:13.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5514" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":85,"skipped":1632,"failed":0}
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 36 lines ...
• [SLOW TEST:63.671 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":86,"skipped":1638,"failed":0}
SSSSS
------------------------------
[sig-node] Pods 
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 30 lines ...
Sep 17 03:12:21.639: INFO: observed event type MODIFIED
Sep 17 03:12:21.645: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:12:21.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7034" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":87,"skipped":1643,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:12:21.663: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 17 03:12:21.727: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:21.744: INFO: Number of nodes with available pods: 0
Sep 17 03:12:21.745: INFO: Node kt2-4d7c9b85-175c-minion-group-94gp is running more than one daemon pod
... skipping 3 lines ...
Sep 17 03:12:23.748: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:23.751: INFO: Number of nodes with available pods: 2
Sep 17 03:12:23.751: INFO: Node kt2-4d7c9b85-175c-minion-group-b90v is running more than one daemon pod
Sep 17 03:12:24.748: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:24.751: INFO: Number of nodes with available pods: 3
Sep 17 03:12:24.751: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 17 03:12:24.771: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:24.779: INFO: Number of nodes with available pods: 2
Sep 17 03:12:24.779: INFO: Node kt2-4d7c9b85-175c-minion-group-94gp is running more than one daemon pod
Sep 17 03:12:25.786: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:25.791: INFO: Number of nodes with available pods: 2
Sep 17 03:12:25.791: INFO: Node kt2-4d7c9b85-175c-minion-group-94gp is running more than one daemon pod
Sep 17 03:12:26.790: INFO: DaemonSet pods can't tolerate node kt2-4d7c9b85-175c-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 03:12:26.799: INFO: Number of nodes with available pods: 3
Sep 17 03:12:26.799: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-192, will wait for the garbage collector to delete the pods
Sep 17 03:12:26.900: INFO: Deleting DaemonSet.extensions daemon-set took: 28.88077ms
Sep 17 03:12:27.100: INFO: Terminating DaemonSet.extensions daemon-set pods took: 200.793928ms
... skipping 8 lines ...
Sep 17 03:12:29.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-192" for this suite.

• [SLOW TEST:8.070 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":88,"skipped":1668,"failed":0}
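Annotation: the `DaemonSet pods can't tolerate node ... with taints ... skip checking this node` lines above reflect taint/toleration matching: the master node carries `NoSchedule` taints that the DaemonSet pod's default tolerations do not cover, so the test skips it when counting available pods. A deliberately simplified sketch of that check (key-only matching with the `Exists` operator; real matching also considers values, `Equal`, and effects):

```python
def untolerated_taints(node_taints, tolerations):
    """Return the node taints that no toleration covers.

    Simplified assumption: a toleration {"key": k, "operator": "Exists"} covers
    any taint with key k, and an empty key with Exists covers everything.
    """
    covered = {t["key"] for t in tolerations if t.get("operator") == "Exists"}
    if "" in covered:
        return []
    return [t for t in node_taints if t["key"] not in covered]

master_taints = [
    {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"},
    {"key": "node.kubernetes.io/unschedulable", "effect": "NoSchedule"},
]
# Default DaemonSet tolerations, as listed in the pod dump later in this log:
# pressure/unschedulable taints are covered, the master role taint is not.
ds_tolerations = [
    {"key": "node.kubernetes.io/not-ready", "operator": "Exists"},
    {"key": "node.kubernetes.io/unreachable", "operator": "Exists"},
    {"key": "node.kubernetes.io/disk-pressure", "operator": "Exists"},
    {"key": "node.kubernetes.io/memory-pressure", "operator": "Exists"},
    {"key": "node.kubernetes.io/pid-pressure", "operator": "Exists"},
    {"key": "node.kubernetes.io/unschedulable", "operator": "Exists"},
]
skipped = untolerated_taints(master_taints, ds_tolerations)
print([t["key"] for t in skipped])  # ['node-role.kubernetes.io/master']
```

Because one taint remains uncovered, the node is excluded, which is exactly why the log repeats the "skip checking this node" message for the master on every poll.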
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:12:29.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6966" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":89,"skipped":1704,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:52.215 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":90,"skipped":1719,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 34 lines ...
• [SLOW TEST:8.235 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":91,"skipped":1787,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 27 lines ...
• [SLOW TEST:76.555 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":92,"skipped":1825,"failed":0}
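Annotation: each `{"msg":"PASSED ..."}` line in this log is a JSON progress record carrying running totals. A small helper to summarize a run from those records (the two sample strings below are copied verbatim from lines earlier in this log):

```python
import json

def summarize(progress_lines):
    """Return (completed, skipped, remaining) from the last JSON progress record."""
    last = json.loads(progress_lines[-1])
    return last["completed"], last["skipped"], last["total"] - last["completed"]

lines = [
    '{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":79,"skipped":1523,"failed":0}',
    '{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":92,"skipped":1825,"failed":0}',
]
print(summarize(lines))  # (92, 1825, 254)
```

The long `SSSS...` runs between records are per-spec skip markers from the test runner; they correspond to the growth of the `skipped` counter between consecutive JSON lines.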
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should list and delete a collection of DaemonSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 27 lines ...
Sep 17 03:14:48.957: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"8372"},"items":[{"metadata":{"name":"daemon-set-66bjg","generateName":"daemon-set-","namespace":"daemonsets-8502","uid":"b7a916c9-031d-480f-bace-e7b1c07ceaa2","resourceVersion":"8366","creationTimestamp":"2021-09-17T03:14:46Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"98fb9dca-c33e-452a-a2b4-0d5e47bef80a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98fb9dca-c33e-452a-a2b4-0d5e47bef80a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime
":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.78\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tjvcx","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-tjvcx","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-4d7c9b85-175c-minion-group-94gp","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-4d7c9b85-175c-minion-group-94gp"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":t
rue,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"}],"hostIP":"10.128.0.5","podIP":"10.64.3.78","podIPs":[{"ip":"10.64.3.78"}],"startTime":"2021-09-17T03:14:46Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T03:14:47Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://7a8b3a6154981f59862e83a838a4f7854206480a64cd60b4fae20384b753c6fa","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-df6gp","generateName":"daemon-set-","namespace":"daemonsets-8502","uid":"89d0c820-5c56-4558-9b34-f2bc49d6c85e","resourceVersion":"8368","creationTimestamp":"2021-09-17T03:14:46Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"98fb9dca-c33e-452a-a2b4-0d5e47bef80a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98fb9dca-c33e-452a-a2b4-0d5e47bef80a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:req
uiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-ttwbc","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-ttwbc","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"C
lusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-4d7c9b85-175c-minion-group-n0sz","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-4d7c9b85-175c-minion-group-n0sz"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"}],"hostIP":"10.128.0.3","podIP":"10.64.2.22","podIPs":[{"ip":"10.64.2.22"}],"startTime":"2021-09-17T03:14:46Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T03:14:47Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://067f27564d220522f425eb9b828c50d474f86285eac8d7b2bb3c8b6fdfe62ef3","started":true}],"qosClass":"BestEffort"}},{"
metadata":{"name":"daemon-set-gxg5v","generateName":"daemon-set-","namespace":"daemonsets-8502","uid":"57127897-9129-46d8-920b-0088a98da41f","resourceVersion":"8370","creationTimestamp":"2021-09-17T03:14:46Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"98fb9dca-c33e-452a-a2b4-0d5e47bef80a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:46Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"98fb9dca-c33e-452a-a2b4-0d5e47bef80a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T03:14:48Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podI
P":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.47\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-kjw4c","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-kjw4c","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-4d7c9b85-175c-minion-group-b90v","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-4d7c9b85-175c-minion-group-b90v"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status
":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:48Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T03:14:46Z"}],"hostIP":"10.128.0.4","podIP":"10.64.1.47","podIPs":[{"ip":"10.64.1.47"}],"startTime":"2021-09-17T03:14:46Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T03:14:47Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://dfe782634c5e94e9a88a5db3f0455ea64db70729910f64193fb235bbc817387d","started":true}],"qosClass":"BestEffort"}}]}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:14:48.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8502" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":93,"skipped":1841,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 03:14:48.992: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 17 03:14:49.029: INFO: Waiting up to 5m0s for pod "client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5" in namespace "containers-7977" to be "Succeeded or Failed"
Sep 17 03:14:49.034: INFO: Pod "client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.755786ms
Sep 17 03:14:51.038: INFO: Pod "client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00940909s
STEP: Saw pod success
Sep 17 03:14:51.038: INFO: Pod "client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5" satisfied condition "Succeeded or Failed"
Sep 17 03:14:51.041: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:14:51.180: INFO: Waiting for pod client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5 to disappear
Sep 17 03:14:51.183: INFO: Pod client-containers-8fd54fa5-9d05-4f1a-b02f-35a40c2becb5 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:14:51.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7977" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":94,"skipped":1866,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
• [SLOW TEST:6.477 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":95,"skipped":1879,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.079 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":96,"skipped":1893,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 26 lines ...
• [SLOW TEST:88.675 seconds]
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":97,"skipped":1916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:16:41.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7959" for this suite.
STEP: Destroying namespace "webhook-7959-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":98,"skipped":1971,"failed":0}
S
------------------------------
[sig-node] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:16:43.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1145" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":99,"skipped":1972,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:16:43.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4701" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":100,"skipped":1986,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] HostPort 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] HostPort
... skipping 36 lines ...
• [SLOW TEST:18.255 seconds]
[sig-network] HostPort
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":101,"skipped":1998,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Sep 17 03:17:04.249: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 17 03:17:04.249: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:17:04.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5125" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":346,"completed":102,"skipped":2022,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 114 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":103,"skipped":2032,"failed":0}
SSSSSS
------------------------------
[sig-node] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 5 lines ...
[BeforeEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:17:56.882: INFO: The status of Pod server-envvars-d7b0f7df-1dc4-4d87-b211-e563ae2231b9 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 03:17:58.888: INFO: The status of Pod server-envvars-d7b0f7df-1dc4-4d87-b211-e563ae2231b9 is Running (Ready = true)
Sep 17 03:17:58.917: INFO: Waiting up to 5m0s for pod "client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa" in namespace "pods-8461" to be "Succeeded or Failed"
Sep 17 03:17:58.926: INFO: Pod "client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.752929ms
Sep 17 03:18:00.931: INFO: Pod "client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014501248s
STEP: Saw pod success
Sep 17 03:18:00.931: INFO: Pod "client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa" satisfied condition "Succeeded or Failed"
Sep 17 03:18:00.937: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa container env3cont: <nil>
STEP: delete the pod
Sep 17 03:18:00.966: INFO: Waiting for pod client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa to disappear
Sep 17 03:18:00.977: INFO: Pod client-envvars-b6925907-b64e-44d4-8a2f-c912ac405bfa no longer exists
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:00.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8461" for this suite.
•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":104,"skipped":2038,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:18:01.062: INFO: Waiting up to 5m0s for pod "downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89" in namespace "downward-api-6229" to be "Succeeded or Failed"
Sep 17 03:18:01.069: INFO: Pod "downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89": Phase="Pending", Reason="", readiness=false. Elapsed: 7.000053ms
Sep 17 03:18:03.073: INFO: Pod "downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010588514s
STEP: Saw pod success
Sep 17 03:18:03.073: INFO: Pod "downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89" satisfied condition "Succeeded or Failed"
Sep 17 03:18:03.076: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89 container client-container: <nil>
STEP: delete the pod
Sep 17 03:18:03.092: INFO: Waiting for pod downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89 to disappear
Sep 17 03:18:03.096: INFO: Pod downwardapi-volume-db72e8a5-4f9c-4cba-8d78-45e49a94cd89 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:03.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6229" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":2048,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-a2800650-444e-48e9-907e-c85478b3bb24
STEP: Creating a pod to test consume configMaps
Sep 17 03:18:03.147: INFO: Waiting up to 5m0s for pod "pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64" in namespace "configmap-4652" to be "Succeeded or Failed"
Sep 17 03:18:03.153: INFO: Pod "pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64": Phase="Pending", Reason="", readiness=false. Elapsed: 5.527935ms
Sep 17 03:18:05.157: INFO: Pod "pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009804864s
STEP: Saw pod success
Sep 17 03:18:05.157: INFO: Pod "pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64" satisfied condition "Succeeded or Failed"
Sep 17 03:18:05.159: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:18:05.174: INFO: Waiting for pod pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64 to disappear
Sep 17 03:18:05.177: INFO: Pod pod-configmaps-2669f8cc-ede1-4902-879e-4da8b6e35c64 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:05.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4652" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":2067,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 17 03:18:05.221: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:08.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2999" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":107,"skipped":2101,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 2 lines ...
Sep 17 03:18:08.682: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 17 03:18:10.815: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:10.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4258" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":108,"skipped":2131,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.119 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":109,"skipped":2134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:18:22.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160" in namespace "downward-api-8664" to be "Succeeded or Failed"
Sep 17 03:18:22.012: INFO: Pod "downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160": Phase="Pending", Reason="", readiness=false. Elapsed: 4.757564ms
Sep 17 03:18:24.016: INFO: Pod "downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008938959s
STEP: Saw pod success
Sep 17 03:18:24.016: INFO: Pod "downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160" satisfied condition "Succeeded or Failed"
Sep 17 03:18:24.018: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160 container client-container: <nil>
STEP: delete the pod
Sep 17 03:18:24.036: INFO: Waiting for pod downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160 to disappear
Sep 17 03:18:24.040: INFO: Pod downwardapi-volume-e18f0a4f-f164-47a9-ad28-42476c97c160 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:24.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8664" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":2157,"failed":0}
SS
------------------------------
[sig-apps] ReplicaSet 
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 40 lines ...
• [SLOW TEST:5.102 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":111,"skipped":2159,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:7.519 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":112,"skipped":2165,"failed":0}
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 344 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":346,"completed":113,"skipped":2168,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 24 lines ...
• [SLOW TEST:6.469 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":114,"skipped":2205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:18:49.331: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 17 03:18:49.367: INFO: Waiting up to 5m0s for pod "pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f" in namespace "emptydir-8809" to be "Succeeded or Failed"
Sep 17 03:18:49.371: INFO: Pod "pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.321256ms
Sep 17 03:18:51.374: INFO: Pod "pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007439466s
STEP: Saw pod success
Sep 17 03:18:51.374: INFO: Pod "pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f" satisfied condition "Succeeded or Failed"
Sep 17 03:18:51.376: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f container test-container: <nil>
STEP: delete the pod
Sep 17 03:18:51.394: INFO: Waiting for pod pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f to disappear
Sep 17 03:18:51.398: INFO: Pod pod-5b5e5627-99b5-4a70-8e23-16c6b77dea5f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:51.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8809" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":2238,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1a1871f8-71f2-48af-9672-eb5541a43e00
STEP: Creating a pod to test consume secrets
Sep 17 03:18:51.485: INFO: Waiting up to 5m0s for pod "pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed" in namespace "secrets-5197" to be "Succeeded or Failed"
Sep 17 03:18:51.494: INFO: Pod "pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.436556ms
Sep 17 03:18:53.498: INFO: Pod "pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012115603s
STEP: Saw pod success
Sep 17 03:18:53.498: INFO: Pod "pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed" satisfied condition "Succeeded or Failed"
Sep 17 03:18:53.500: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:18:53.515: INFO: Waiting for pod pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed to disappear
Sep 17 03:18:53.518: INFO: Pod pod-secrets-5d78895b-a06e-42a3-a39c-77165b3d15ed no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:53.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5197" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2266,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-7882ff4d-9480-49a2-a6a2-52b13698f0ef
STEP: Creating a pod to test consume configMaps
Sep 17 03:18:53.576: INFO: Waiting up to 5m0s for pod "pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837" in namespace "configmap-4526" to be "Succeeded or Failed"
Sep 17 03:18:53.582: INFO: Pod "pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837": Phase="Pending", Reason="", readiness=false. Elapsed: 5.739586ms
Sep 17 03:18:55.586: INFO: Pod "pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009950269s
STEP: Saw pod success
Sep 17 03:18:55.586: INFO: Pod "pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837" satisfied condition "Succeeded or Failed"
Sep 17 03:18:55.590: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837 container configmap-volume-test: <nil>
STEP: delete the pod
Sep 17 03:18:55.619: INFO: Waiting for pod pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837 to disappear
Sep 17 03:18:55.625: INFO: Pod pod-configmaps-9940b9dd-2f25-46e5-8ff4-4a865761b837 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:55.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4526" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2274,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:18:55.634: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 03:18:56.148: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 03:18:59.164: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:18:59.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1886" for this suite.
STEP: Destroying namespace "webhook-1886-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":118,"skipped":2294,"failed":0}
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 37 lines ...
STEP: Deleting pod pod1 in namespace services-8837
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8837 to expose endpoints map[pod2:[80]]
Sep 17 03:19:08.563: INFO: successfully validated that service endpoint-test2 in namespace services-8837 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Sep 17 03:19:09.564: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-8837 exec execpodbqcx9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 17 03:19:11.717: INFO: rc: 1
Sep 17 03:19:11.717: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-8837 exec execpodbqcx9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 03:19:12.717: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-8837 exec execpodbqcx9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 17 03:19:12.951: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Sep 17 03:19:12.951: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 03:19:12.951: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-8837 exec execpodbqcx9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.20.25 80'
... skipping 12 lines ...
• [SLOW TEST:13.911 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":346,"completed":119,"skipped":2295,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 18 lines ...
• [SLOW TEST:17.480 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":120,"skipped":2297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-60465966-505d-4542-a663-73f2d4d1ac05
STEP: Creating a pod to test consume secrets
Sep 17 03:19:30.818: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b" in namespace "projected-8335" to be "Succeeded or Failed"
Sep 17 03:19:30.827: INFO: Pod "pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.096015ms
Sep 17 03:19:32.831: INFO: Pod "pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012841162s
Sep 17 03:19:34.835: INFO: Pod "pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016871442s
STEP: Saw pod success
Sep 17 03:19:34.835: INFO: Pod "pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b" satisfied condition "Succeeded or Failed"
Sep 17 03:19:34.838: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:19:34.856: INFO: Waiting for pod pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b to disappear
Sep 17 03:19:34.859: INFO: Pod pod-projected-secrets-2298b4e6-69d7-49c6-b83f-a77b2fa2a92b no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:19:34.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8335" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":121,"skipped":2328,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:19:34.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09" in namespace "projected-9701" to be "Succeeded or Failed"
Sep 17 03:19:34.909: INFO: Pod "downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.17798ms
Sep 17 03:19:36.913: INFO: Pod "downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006701023s
STEP: Saw pod success
Sep 17 03:19:36.913: INFO: Pod "downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09" satisfied condition "Succeeded or Failed"
Sep 17 03:19:36.916: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09 container client-container: <nil>
STEP: delete the pod
Sep 17 03:19:36.931: INFO: Waiting for pod downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09 to disappear
Sep 17 03:19:36.934: INFO: Pod downwardapi-volume-df8bbf46-4d65-4fbd-b5db-1e45e3ea3d09 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:19:36.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9701" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":2365,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 15 lines ...
• [SLOW TEST:5.152 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":123,"skipped":2382,"failed":0}
SSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 03:19:42.094: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Sep 17 03:19:42.144: INFO: Waiting up to 5m0s for pod "client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd" in namespace "containers-7869" to be "Succeeded or Failed"
Sep 17 03:19:42.153: INFO: Pod "client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.009447ms
Sep 17 03:19:44.157: INFO: Pod "client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012678525s
STEP: Saw pod success
Sep 17 03:19:44.157: INFO: Pod "client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd" satisfied condition "Succeeded or Failed"
Sep 17 03:19:44.159: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:19:44.173: INFO: Waiting for pod client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd to disappear
Sep 17 03:19:44.176: INFO: Pod client-containers-39033482-b488-4f83-a6aa-76081d4dd4bd no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:19:44.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7869" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":124,"skipped":2388,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
• [SLOW TEST:316.090 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":125,"skipped":2421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 9 lines ...
Sep 17 03:25:00.362: INFO: Endpoints addresses: [34.69.105.80] , ports: [443]
Sep 17 03:25:00.362: INFO: EndpointSlices addresses: [34.69.105.80] , ports: [443]
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:25:00.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7300" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":126,"skipped":2466,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:5.937 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":127,"skipped":2501,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-5e4dae93-ef0b-4cf0-a5d7-6d113adf2ca4
STEP: Creating a pod to test consume secrets
Sep 17 03:25:06.392: INFO: Waiting up to 5m0s for pod "pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365" in namespace "secrets-9900" to be "Succeeded or Failed"
Sep 17 03:25:06.415: INFO: Pod "pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365": Phase="Pending", Reason="", readiness=false. Elapsed: 23.125428ms
Sep 17 03:25:08.419: INFO: Pod "pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026907739s
STEP: Saw pod success
Sep 17 03:25:08.419: INFO: Pod "pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365" satisfied condition "Succeeded or Failed"
Sep 17 03:25:08.427: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365 container secret-env-test: <nil>
STEP: delete the pod
Sep 17 03:25:08.464: INFO: Waiting for pod pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365 to disappear
Sep 17 03:25:08.467: INFO: Pod pod-secrets-29822e72-df98-4c13-b4c0-3b048fc7d365 no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:25:08.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9900" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":128,"skipped":2535,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-83c04be3-80ab-4d3b-a1bc-f9ef52fd2b2d
STEP: Creating a pod to test consume configMaps
Sep 17 03:25:08.516: INFO: Waiting up to 5m0s for pod "pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521" in namespace "configmap-9175" to be "Succeeded or Failed"
Sep 17 03:25:08.520: INFO: Pod "pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194889ms
Sep 17 03:25:10.524: INFO: Pod "pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007795855s
STEP: Saw pod success
Sep 17 03:25:10.524: INFO: Pod "pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521" satisfied condition "Succeeded or Failed"
Sep 17 03:25:10.528: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:25:10.551: INFO: Waiting for pod pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521 to disappear
Sep 17 03:25:10.555: INFO: Pod pod-configmaps-c321133d-fcd0-482b-bae8-76e64c19d521 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:25:10.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9175" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":129,"skipped":2550,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":346,"completed":130,"skipped":2560,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 17 03:25:19.910: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:25:19.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8069" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2578,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 03:25:19.932: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep 17 03:25:19.977: INFO: Waiting up to 5m0s for pod "var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66" in namespace "var-expansion-2232" to be "Succeeded or Failed"
Sep 17 03:25:19.980: INFO: Pod "var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.715189ms
Sep 17 03:25:21.984: INFO: Pod "var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007909813s
STEP: Saw pod success
Sep 17 03:25:21.985: INFO: Pod "var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66" satisfied condition "Succeeded or Failed"
Sep 17 03:25:21.987: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66 container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:25:22.001: INFO: Waiting for pod var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66 to disappear
Sep 17 03:25:22.005: INFO: Pod var-expansion-2005fea5-6a31-4c65-8d01-b809caf1db66 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:25:22.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2232" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":132,"skipped":2628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 95 lines ...
• [SLOW TEST:40.102 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":133,"skipped":2657,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 23 lines ...
• [SLOW TEST:13.135 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":134,"skipped":2724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 03:26:15.251: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Sep 17 03:26:15.291: INFO: Waiting up to 5m0s for pod "var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29" in namespace "var-expansion-9168" to be "Succeeded or Failed"
Sep 17 03:26:15.295: INFO: Pod "var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210461ms
Sep 17 03:26:17.298: INFO: Pod "var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007168724s
STEP: Saw pod success
Sep 17 03:26:17.298: INFO: Pod "var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29" satisfied condition "Succeeded or Failed"
Sep 17 03:26:17.300: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29 container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:26:17.317: INFO: Waiting for pod var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29 to disappear
Sep 17 03:26:17.321: INFO: Pod var-expansion-a750d27c-cef6-4615-8cd4-50ba82518c29 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:26:17.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9168" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":135,"skipped":2763,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Sep 17 03:26:17.812: INFO: stderr: ""
Sep 17 03:26:17.812: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:26:17.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5496" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":136,"skipped":2795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:26:17.847: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 17 03:26:17.899: INFO: Waiting up to 5m0s for pod "pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97" in namespace "emptydir-4382" to be "Succeeded or Failed"
Sep 17 03:26:17.904: INFO: Pod "pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.638939ms
Sep 17 03:26:19.908: INFO: Pod "pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008464708s
Sep 17 03:26:21.911: INFO: Pod "pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012380389s
STEP: Saw pod success
Sep 17 03:26:21.911: INFO: Pod "pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97" satisfied condition "Succeeded or Failed"
Sep 17 03:26:21.914: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97 container test-container: <nil>
STEP: delete the pod
Sep 17 03:26:21.929: INFO: Waiting for pod pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97 to disappear
Sep 17 03:26:21.933: INFO: Pod pod-7fc1bf2f-4e03-405a-9f1a-270bddc00d97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:26:21.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4382" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":137,"skipped":2826,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 59 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":138,"skipped":2862,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:7.310 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":139,"skipped":2864,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-425
STEP: Waiting until pod test-pod will start running in namespace statefulset-425
STEP: Creating statefulset with conflicting port in namespace statefulset-425
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-425
Sep 17 03:28:12.585: INFO: Observed stateful pod in namespace: statefulset-425, name: ss-0, uid: 9cbc16da-aff3-442f-8c6d-ad0ae5418d3f, status phase: Pending. Waiting for statefulset controller to delete.
Sep 17 03:28:12.614: INFO: Observed stateful pod in namespace: statefulset-425, name: ss-0, uid: 9cbc16da-aff3-442f-8c6d-ad0ae5418d3f, status phase: Failed. Waiting for statefulset controller to delete.
Sep 17 03:28:12.657: INFO: Observed stateful pod in namespace: statefulset-425, name: ss-0, uid: 9cbc16da-aff3-442f-8c6d-ad0ae5418d3f, status phase: Failed. Waiting for statefulset controller to delete.
Sep 17 03:28:12.666: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-425
STEP: Removing pod with conflicting port in namespace statefulset-425
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-425 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Sep 17 03:28:14.752: INFO: Deleting all statefulset in ns statefulset-425
... skipping 10 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":140,"skipped":2957,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-8d5e3d1d-a132-4986-8dac-ae97b6baf34a
STEP: Creating a pod to test consume secrets
Sep 17 03:28:24.866: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2" in namespace "projected-1422" to be "Succeeded or Failed"
Sep 17 03:28:24.872: INFO: Pod "pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064603ms
Sep 17 03:28:26.876: INFO: Pod "pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010619687s
STEP: Saw pod success
Sep 17 03:28:26.876: INFO: Pod "pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2" satisfied condition "Succeeded or Failed"
Sep 17 03:28:26.879: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:28:26.912: INFO: Waiting for pod pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2 to disappear
Sep 17 03:28:26.916: INFO: Pod pod-projected-secrets-19942c6c-f76d-4c78-a719-0304d87916f2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:28:26.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1422" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2962,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 33 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":142,"skipped":2982,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-ee5591eb-23c7-4e0f-b972-08a477ea8884
STEP: Creating a pod to test consume secrets
Sep 17 03:28:47.184: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65" in namespace "projected-482" to be "Succeeded or Failed"
Sep 17 03:28:47.189: INFO: Pod "pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.904115ms
Sep 17 03:28:49.193: INFO: Pod "pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00948355s
STEP: Saw pod success
Sep 17 03:28:49.193: INFO: Pod "pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65" satisfied condition "Succeeded or Failed"
Sep 17 03:28:49.196: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:28:49.215: INFO: Waiting for pod pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65 to disappear
Sep 17 03:28:49.219: INFO: Pod pod-projected-secrets-bb43783a-0854-41d2-8c8b-e47e73b9ac65 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:28:49.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-482" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":143,"skipped":3005,"failed":0}
S
------------------------------
[sig-node] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 14 lines ...
• [SLOW TEST:60.056 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":144,"skipped":3006,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:29:52.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5004" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":145,"skipped":3025,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1b12ca89-290d-4af9-b495-49421861ee8c
STEP: Creating a pod to test consume secrets
Sep 17 03:29:52.760: INFO: Waiting up to 5m0s for pod "pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014" in namespace "secrets-3255" to be "Succeeded or Failed"
Sep 17 03:29:52.767: INFO: Pod "pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014": Phase="Pending", Reason="", readiness=false. Elapsed: 6.661182ms
Sep 17 03:29:54.771: INFO: Pod "pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010312413s
STEP: Saw pod success
Sep 17 03:29:54.771: INFO: Pod "pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014" satisfied condition "Succeeded or Failed"
Sep 17 03:29:54.773: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:29:54.792: INFO: Waiting for pod pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014 to disappear
Sep 17 03:29:54.795: INFO: Pod pod-secrets-244e893a-bb7e-459c-a463-b43eaca64014 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:29:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3255" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":146,"skipped":3041,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 85 lines ...
• [SLOW TEST:304.163 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":147,"skipped":3050,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:34:59.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2538" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":148,"skipped":3054,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:34:59.051: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 17 03:34:59.100: INFO: Waiting up to 5m0s for pod "pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea" in namespace "emptydir-5286" to be "Succeeded or Failed"
Sep 17 03:34:59.110: INFO: Pod "pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea": Phase="Pending", Reason="", readiness=false. Elapsed: 9.435572ms
Sep 17 03:35:01.113: INFO: Pod "pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013031853s
STEP: Saw pod success
Sep 17 03:35:01.114: INFO: Pod "pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea" satisfied condition "Succeeded or Failed"
Sep 17 03:35:01.117: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea container test-container: <nil>
STEP: delete the pod
Sep 17 03:35:01.157: INFO: Waiting for pod pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea to disappear
Sep 17 03:35:01.160: INFO: Pod pod-bfd1dee7-eccf-4157-8aca-9c58e92dceea no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:01.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5286" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":3065,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 28 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":3088,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:11.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7479" for this suite.
STEP: Destroying namespace "webhook-7479-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":151,"skipped":3088,"failed":0}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:35:11.194: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:14.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8868" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":346,"completed":152,"skipped":3088,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-40014b16-3091-4dd8-ab70-aac1e0d549cb
STEP: Creating a pod to test consume configMaps
Sep 17 03:35:14.515: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf" in namespace "projected-3605" to be "Succeeded or Failed"
Sep 17 03:35:14.520: INFO: Pod "pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417163ms
Sep 17 03:35:16.531: INFO: Pod "pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015293871s
STEP: Saw pod success
Sep 17 03:35:16.531: INFO: Pod "pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf" satisfied condition "Succeeded or Failed"
Sep 17 03:35:16.541: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 17 03:35:16.600: INFO: Waiting for pod pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf to disappear
Sep 17 03:35:16.606: INFO: Pod pod-projected-configmaps-23b9ddb1-8bc1-44e0-ab5d-3168f864aabf no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:16.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3605" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":3108,"failed":0}

------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:16.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5579" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":154,"skipped":3108,"failed":0}
S
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 03:35:16.780: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 17 03:35:16.859: INFO: Waiting up to 5m0s for pod "client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53" in namespace "containers-3682" to be "Succeeded or Failed"
Sep 17 03:35:16.868: INFO: Pod "client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53": Phase="Pending", Reason="", readiness=false. Elapsed: 8.909136ms
Sep 17 03:35:18.872: INFO: Pod "client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012434451s
STEP: Saw pod success
Sep 17 03:35:18.872: INFO: Pod "client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53" satisfied condition "Succeeded or Failed"
Sep 17 03:35:18.874: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:35:18.893: INFO: Waiting for pod client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53 to disappear
Sep 17 03:35:18.895: INFO: Pod client-containers-5b455d8a-e0a0-4386-963d-a1da33764b53 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:35:18.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3682" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":155,"skipped":3109,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.139 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":346,"completed":156,"skipped":3114,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 47 lines ...
• [SLOW TEST:10.653 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":157,"skipped":3126,"failed":0}
SSSS
------------------------------
[sig-apps] CronJob 
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 16 lines ...
• [SLOW TEST:76.099 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":158,"skipped":3130,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:37:01.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-9119" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":159,"skipped":3154,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 10 lines ...
Sep 17 03:37:03.906: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 03:37:03.993: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:37:03.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3734" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":160,"skipped":3174,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 111 lines ...
• [SLOW TEST:8.211 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":161,"skipped":3180,"failed":0}
SSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Discovery
... skipping 104 lines ...
Sep 17 03:37:12.988: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Sep 17 03:37:12.988: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:37:12.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-4471" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":162,"skipped":3183,"failed":0}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:37:12.995: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 03:37:13.040: INFO: Waiting up to 5m0s for pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37" in namespace "emptydir-7534" to be "Succeeded or Failed"
Sep 17 03:37:13.044: INFO: Pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37": Phase="Pending", Reason="", readiness=false. Elapsed: 3.883125ms
Sep 17 03:37:15.047: INFO: Pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007207207s
Sep 17 03:37:17.052: INFO: Pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011597505s
Sep 17 03:37:19.056: INFO: Pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015423612s
STEP: Saw pod success
Sep 17 03:37:19.056: INFO: Pod "pod-50147185-f483-4d58-b7b8-a36c2fce8a37" satisfied condition "Succeeded or Failed"
Sep 17 03:37:19.058: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-50147185-f483-4d58-b7b8-a36c2fce8a37 container test-container: <nil>
STEP: delete the pod
Sep 17 03:37:19.108: INFO: Waiting for pod pod-50147185-f483-4d58-b7b8-a36c2fce8a37 to disappear
Sep 17 03:37:19.111: INFO: Pod pod-50147185-f483-4d58-b7b8-a36c2fce8a37 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 3 lines ...
• [SLOW TEST:6.122 seconds]
[sig-storage] EmptyDir volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":163,"skipped":3185,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 160 lines ...
Sep 17 03:37:20.237: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-3429 create -f -'
Sep 17 03:37:20.438: INFO: stderr: ""
Sep 17 03:37:20.438: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Sep 17 03:37:20.438: INFO: Waiting for all frontend pods to be Running.
Sep 17 03:37:25.489: INFO: Waiting for frontend to serve content.
Sep 17 03:37:26.546: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Sep 17 03:37:31.558: INFO: Trying to add a new entry to the guestbook.
Sep 17 03:37:31.567: INFO: Verifying that added entry can be retrieved.
Sep 17 03:37:31.590: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Sep 17 03:37:36.602: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-3429 delete --grace-period=0 --force -f -'
Sep 17 03:37:36.688: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 17 03:37:36.688: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Sep 17 03:37:36.688: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-3429 delete --grace-period=0 --force -f -'
... skipping 25 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":346,"completed":164,"skipped":3191,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-50296938-aa0d-4aa1-a0b7-52bd32c5536c
STEP: Creating a pod to test consume configMaps
Sep 17 03:37:37.162: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659" in namespace "projected-3449" to be "Succeeded or Failed"
Sep 17 03:37:37.166: INFO: Pod "pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659": Phase="Pending", Reason="", readiness=false. Elapsed: 4.308354ms
Sep 17 03:37:39.170: INFO: Pod "pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008273752s
STEP: Saw pod success
Sep 17 03:37:39.170: INFO: Pod "pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659" satisfied condition "Succeeded or Failed"
Sep 17 03:37:39.173: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:37:39.190: INFO: Waiting for pod pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659 to disappear
Sep 17 03:37:39.193: INFO: Pod pod-projected-configmaps-46d3896a-9225-4eed-be8f-7046d55b6659 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:37:39.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3449" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":165,"skipped":3195,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:37:39.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1" in namespace "projected-5729" to be "Succeeded or Failed"
Sep 17 03:37:39.253: INFO: Pod "downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.375926ms
Sep 17 03:37:41.258: INFO: Pod "downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008984764s
Sep 17 03:37:43.263: INFO: Pod "downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014114709s
STEP: Saw pod success
Sep 17 03:37:43.263: INFO: Pod "downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1" satisfied condition "Succeeded or Failed"
Sep 17 03:37:43.266: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1 container client-container: <nil>
STEP: delete the pod
Sep 17 03:37:43.281: INFO: Waiting for pod downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1 to disappear
Sep 17 03:37:43.284: INFO: Pod downwardapi-volume-0e50165f-7a59-46bc-b01f-05cf681741a1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:37:43.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5729" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":166,"skipped":3220,"failed":0}

------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 48 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":167,"skipped":3220,"failed":0}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 18 lines ...
• [SLOW TEST:6.734 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":168,"skipped":3223,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:39:00.528: INFO: Waiting up to 5m0s for pod "busybox-user-65534-37cf0a4d-3a79-46ea-9aaa-5e34e1f19727" in namespace "security-context-test-136" to be "Succeeded or Failed"
Sep 17 03:39:00.533: INFO: Pod "busybox-user-65534-37cf0a4d-3a79-46ea-9aaa-5e34e1f19727": Phase="Pending", Reason="", readiness=false. Elapsed: 5.229858ms
Sep 17 03:39:02.536: INFO: Pod "busybox-user-65534-37cf0a4d-3a79-46ea-9aaa-5e34e1f19727": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008557834s
Sep 17 03:39:02.536: INFO: Pod "busybox-user-65534-37cf0a4d-3a79-46ea-9aaa-5e34e1f19727" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:02.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-136" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":169,"skipped":3242,"failed":0}
SSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:02.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-1235" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":346,"completed":170,"skipped":3247,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:04.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1959" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":171,"skipped":3278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:39:04.845: INFO: Waiting up to 5m0s for pod "downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a" in namespace "downward-api-1783" to be "Succeeded or Failed"
Sep 17 03:39:04.849: INFO: Pod "downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.535474ms
Sep 17 03:39:06.853: INFO: Pod "downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008430014s
STEP: Saw pod success
Sep 17 03:39:06.853: INFO: Pod "downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a" satisfied condition "Succeeded or Failed"
Sep 17 03:39:06.856: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a container client-container: <nil>
STEP: delete the pod
Sep 17 03:39:06.881: INFO: Waiting for pod downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a to disappear
Sep 17 03:39:06.885: INFO: Pod downwardapi-volume-efb5adbf-8301-4730-a0e8-3aa18a4da58a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:06.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1783" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":172,"skipped":3301,"failed":0}
SSS
------------------------------
[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 24 lines ...
Sep 17 03:39:07.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-2938" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4966" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":173,"skipped":3304,"failed":0}
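The PodDisruptionBudgets being listed and collection-deleted above have roughly this shape (names and labels are illustrative; `policy/v1` is available from Kubernetes 1.21 on):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
  labels:
    e2e: pdb-collection   # a shared label lets a whole collection be deleted at once
spec:
  minAvailable: 1          # alternatively maxUnavailable
  selector:
    matchLabels:
      app: example
```

Listing across namespaces and deleting by label then correspond to `kubectl get pdb --all-namespaces` and `kubectl delete pdb -l e2e=pdb-collection`.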
SS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 13 lines ...
Sep 17 03:39:10.153: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:10.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2488" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":174,"skipped":3306,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 03:39:10.202: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep 17 03:39:10.249: INFO: Waiting up to 5m0s for pod "var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356" in namespace "var-expansion-1740" to be "Succeeded or Failed"
Sep 17 03:39:10.254: INFO: Pod "var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356": Phase="Pending", Reason="", readiness=false. Elapsed: 5.099804ms
Sep 17 03:39:12.262: INFO: Pod "var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012787636s
STEP: Saw pod success
Sep 17 03:39:12.262: INFO: Pod "var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356" satisfied condition "Succeeded or Failed"
Sep 17 03:39:12.268: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356 container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:39:12.355: INFO: Waiting for pod var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356 to disappear
Sep 17 03:39:12.360: INFO: Pod var-expansion-7b2821db-7a08-4b5d-85e5-b89f1dedc356 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:12.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1740" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":175,"skipped":3316,"failed":0}
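The substitution being tested above uses the kubelet's `$(VAR)` expansion in `command`/`args` — resolved from the container's declared `env`, not by a shell. A minimal sketch (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello from the environment"
    # $(MESSAGE) is expanded by the kubelet before the process starts
    command: ["sh", "-c", "echo $(MESSAGE)"]
```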
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Sep 17 03:39:12.483: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1711  28584e63-1224-492b-8b19-b239641482c2 14634 0 2021-09-17 03:39:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-17 03:39:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 17 03:39:12.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-1711  28584e63-1224-492b-8b19-b239641482c2 14635 0 2021-09-17 03:39:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-17 03:39:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:39:12.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1711" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":176,"skipped":3336,"failed":0}
SSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:39:12.502: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
STEP: updating the pod
Sep 17 03:41:13.103: INFO: Successfully updated pod "var-expansion-98539ba3-3614-402c-b7b9-921ac2105e55"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Sep 17 03:41:15.111: INFO: Deleting pod "var-expansion-98539ba3-3614-402c-b7b9-921ac2105e55" in namespace "var-expansion-7185"
Sep 17 03:41:15.116: INFO: Wait up to 5m0s for pod "var-expansion-98539ba3-3614-402c-b7b9-921ac2105e55" to be fully deleted
... skipping 5 lines ...
• [SLOW TEST:154.631 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":177,"skipped":3343,"failed":0}
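The pattern under test here is `subPathExpr`: a volume subpath built from an environment variable, which the test first makes fail and then repairs by updating the pod. A rough sketch of the working shape, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "ls /volume_mount"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name   # downward API source for the subpath
    volumeMounts:
    - name: workdir
      mountPath: /volume_mount
      subPathExpr: $(POD_NAME)       # expanded per pod; an unresolvable reference fails the mount
  volumes:
  - name: workdir
    emptyDir: {}
```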
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 49 lines ...
• [SLOW TEST:9.544 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":178,"skipped":3349,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:41:56.715: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69991f30-879e-43bb-847c-920da302a266" in namespace "projected-38" to be "Succeeded or Failed"
Sep 17 03:41:56.720: INFO: Pod "downwardapi-volume-69991f30-879e-43bb-847c-920da302a266": Phase="Pending", Reason="", readiness=false. Elapsed: 4.906434ms
Sep 17 03:41:58.724: INFO: Pod "downwardapi-volume-69991f30-879e-43bb-847c-920da302a266": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008921397s
STEP: Saw pod success
Sep 17 03:41:58.724: INFO: Pod "downwardapi-volume-69991f30-879e-43bb-847c-920da302a266" satisfied condition "Succeeded or Failed"
Sep 17 03:41:58.726: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-69991f30-879e-43bb-847c-920da302a266 container client-container: <nil>
STEP: delete the pod
Sep 17 03:41:58.768: INFO: Waiting for pod downwardapi-volume-69991f30-879e-43bb-847c-920da302a266 to disappear
Sep 17 03:41:58.771: INFO: Pod downwardapi-volume-69991f30-879e-43bb-847c-920da302a266 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:41:58.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-38" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":179,"skipped":3377,"failed":0}

------------------------------
[sig-apps] ReplicaSet 
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 21 lines ...
• [SLOW TEST:5.105 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":180,"skipped":3377,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:42:03.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5491" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":181,"skipped":3423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:42:04.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4820" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":182,"skipped":3459,"failed":0}
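The rejection above comes from API-server validation of `securityContext.sysctls` names: a pod mixing one valid sysctl with malformed names is refused at admission. An illustrative sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced   # valid, namespaced-safe sysctl
      value: "0"
    - name: foo-                     # invalid name: rejected by validation
      value: "bar"
  containers:
  - name: test-container
    image: busybox
    command: ["sleep", "3600"]
```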
SSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 27 lines ...
• [SLOW TEST:36.921 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":183,"skipped":3470,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 53 lines ...
• [SLOW TEST:9.861 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":184,"skipped":3504,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 41 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":185,"skipped":3524,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Sep 17 03:44:18.270: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:44:18.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8013" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":186,"skipped":3537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 11 lines ...
Sep 17 03:44:20.333: INFO: The status of Pod annotationupdated9c25c0f-c16d-40e9-a549-b535df3eb837 is Running (Ready = true)
Sep 17 03:44:20.879: INFO: Successfully updated pod "annotationupdated9c25c0f-c16d-40e9-a549-b535df3eb837"
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:44:22.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9388" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":187,"skipped":3576,"failed":0}
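The live-update behavior verified above relies on a downward API volume projecting `metadata.annotations`: when the pod's annotations are patched, the kubelet rewrites the mounted file without restarting the container. A sketch with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-example
  annotations:
    build: "one"                     # patching this later updates the mounted file
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
```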
SSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:242.834 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":188,"skipped":3589,"failed":0}
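The probe configuration exercised above is a `tcpSocket` liveness probe against port 8080; as long as the port accepts connections, the container is never restarted. Illustrative sketch (image and args are assumptions, not from the test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-example
spec:
  containers:
  - name: liveness
    image: busybox                   # illustrative; anything listening on 8080 works
    command: ["sh", "-c", "nc -lk -p 8080 -e cat"]
    livenessProbe:
      tcpSocket:
        port: 8080                   # kubelet opens a TCP connection; success = alive
      initialDelaySeconds: 15
      periodSeconds: 10
```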
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
Sep 17 03:48:25.852: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5042  c5032b00-e1fe-41c0-a0c6-06bb7c2c0a3d 16183 0 2021-09-17 03:48:25 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-17 03:48:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 17 03:48:25.852: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5042  c5032b00-e1fe-41c0-a0c6-06bb7c2c0a3d 16184 0 2021-09-17 03:48:25 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-17 03:48:25 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:48:25.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5042" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":189,"skipped":3611,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 03:48:25.861: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 03:48:25.916: INFO: Waiting up to 5m0s for pod "pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f" in namespace "emptydir-4908" to be "Succeeded or Failed"
Sep 17 03:48:25.923: INFO: Pod "pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.288809ms
Sep 17 03:48:27.927: INFO: Pod "pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01153814s
STEP: Saw pod success
Sep 17 03:48:27.927: INFO: Pod "pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f" satisfied condition "Succeeded or Failed"
Sep 17 03:48:27.930: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f container test-container: <nil>
STEP: delete the pod
Sep 17 03:48:27.960: INFO: Waiting for pod pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f to disappear
Sep 17 03:48:27.967: INFO: Pod pod-a7cfcd60-2a36-4075-92a3-6c4eaed8e22f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:48:27.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4908" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":190,"skipped":3624,"failed":0}
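The emptyDir variant tested above is tmpfs-backed (`medium: Memory`) and written as a non-root user; the 0666 file mode itself is applied by the test's helper binary rather than the manifest. A rough sketch of the pod shape, with illustrative names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo content > /test-volume/file && ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # backed by tmpfs instead of node disk
```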
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 03:48:28.016: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149" in namespace "projected-7397" to be "Succeeded or Failed"
Sep 17 03:48:28.020: INFO: Pod "downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149": Phase="Pending", Reason="", readiness=false. Elapsed: 3.922646ms
Sep 17 03:48:30.023: INFO: Pod "downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00703684s
STEP: Saw pod success
Sep 17 03:48:30.023: INFO: Pod "downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149" satisfied condition "Succeeded or Failed"
Sep 17 03:48:30.026: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149 container client-container: <nil>
STEP: delete the pod
Sep 17 03:48:30.044: INFO: Waiting for pod downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149 to disappear
Sep 17 03:48:30.047: INFO: Pod downwardapi-volume-44377e17-df83-4519-baa8-a76b5af9b149 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:48:30.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7397" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3770,"failed":0}
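This test covers the same memory-limit projection as the plain downward API volume earlier, but through a `projected` volume, whose `sources` list can mix downward API items with secrets, configmaps, and service account tokens. Illustrative sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:                       # wrapper that can combine several sources
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```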

------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:48:34.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1346" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":192,"skipped":3770,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
STEP: creating replication controller affinity-clusterip-timeout in namespace services-4528
I0917 03:48:36.548786   97125 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-4528, replica count: 3
I0917 03:48:39.599979   97125 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 03:48:39.605: INFO: Creating new exec pod
Sep 17 03:48:42.618: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Sep 17 03:48:43.865: INFO: rc: 1
Sep 17 03:48:43.865: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 03:48:44.865: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Sep 17 03:48:46.038: INFO: rc: 1
Sep 17 03:48:46.038: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 03:48:46.866: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Sep 17 03:48:47.050: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Sep 17 03:48:47.050: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 03:48:47.050: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4528 exec execpod-affinityb8z4b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.252.118 80'
... skipping 41 lines ...
• [SLOW TEST:56.734 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":193,"skipped":3781,"failed":0}
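The session affinity timeout verified above is configured on the Service via `sessionAffinity: ClientIP` plus `sessionAffinityConfig.clientIP.timeoutSeconds`; kube-proxy pins a client to one backend until the timeout elapses. Illustrative sketch (names, selector, and ports are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: affinity-clusterip-timeout-example
spec:
  type: ClusterIP
  selector:
    app: affinity-backend
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP          # stick each client IP to one endpoint
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10             # affinity expires after 10s of inactivity
```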
S
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":346,"completed":194,"skipped":3782,"failed":0}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: waiting for the root ca configmap reconciled
Sep 17 03:49:38.428: INFO: Reconciled root ca configmap in namespace "svcaccounts-8636"
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:38.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8636" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":195,"skipped":3790,"failed":0}
SSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":196,"skipped":3796,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:46.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4976" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":197,"skipped":3817,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Sep 17 03:49:47.293: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 17 03:49:47.293: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:47.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6935" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":346,"completed":198,"skipped":3843,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-f9532790-5eac-41bb-a0f3-d666caed113f
STEP: Creating a pod to test consume configMaps
Sep 17 03:49:47.369: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43" in namespace "projected-6950" to be "Succeeded or Failed"
Sep 17 03:49:47.375: INFO: Pod "pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43": Phase="Pending", Reason="", readiness=false. Elapsed: 5.625407ms
Sep 17 03:49:49.382: INFO: Pod "pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012891677s
STEP: Saw pod success
Sep 17 03:49:49.382: INFO: Pod "pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43" satisfied condition "Succeeded or Failed"
Sep 17 03:49:49.385: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-n0sz pod pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:49:49.424: INFO: Waiting for pod pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43 to disappear
Sep 17 03:49:49.428: INFO: Pod pod-projected-configmaps-23543cdb-2abb-4d1e-98e7-61b23339cf43 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:49.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6950" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":199,"skipped":3861,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:49.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-1005" for this suite.
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":200,"skipped":3877,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Sep 17 03:49:51.843: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 03:49:51.996: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:52.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8254" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":201,"skipped":3887,"failed":0}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 10 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:54.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7539" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":202,"skipped":3888,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:49:57.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7371" for this suite.
STEP: Destroying namespace "webhook-7371-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":203,"skipped":3897,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 15 lines ...
• [SLOW TEST:15.028 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":204,"skipped":3905,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 03:50:12.913: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 03:50:12.955: INFO: Waiting up to 5m0s for pod "downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf" in namespace "downward-api-9820" to be "Succeeded or Failed"
Sep 17 03:50:12.961: INFO: Pod "downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.787715ms
Sep 17 03:50:14.964: INFO: Pod "downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009537029s
STEP: Saw pod success
Sep 17 03:50:14.964: INFO: Pod "downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf" satisfied condition "Succeeded or Failed"
Sep 17 03:50:14.966: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:50:14.983: INFO: Waiting for pod downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf to disappear
Sep 17 03:50:14.986: INFO: Pod downward-api-85a537f6-3e95-4ccd-afd8-91efd59031bf no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:14.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9820" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":205,"skipped":3930,"failed":0}
SSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 17 03:50:15.036: INFO: The status of Pod busybox-host-aliasescee69ac8-c373-4d14-ac83-1c2f5d406f7f is Pending, waiting for it to be Running (with Ready = true)
Sep 17 03:50:17.040: INFO: The status of Pod busybox-host-aliasescee69ac8-c373-4d14-ac83-1c2f5d406f7f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:17.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5786" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":206,"skipped":3935,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should update/patch PodDisruptionBudget status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 15 lines ...
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:19.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7842" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":207,"skipped":3977,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 03:50:19.185: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 03:50:19.226: INFO: Waiting up to 5m0s for pod "downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676" in namespace "downward-api-1798" to be "Succeeded or Failed"
Sep 17 03:50:19.232: INFO: Pod "downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676": Phase="Pending", Reason="", readiness=false. Elapsed: 6.015179ms
Sep 17 03:50:21.239: INFO: Pod "downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012484156s
STEP: Saw pod success
Sep 17 03:50:21.239: INFO: Pod "downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676" satisfied condition "Succeeded or Failed"
Sep 17 03:50:21.242: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676 container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:50:21.435: INFO: Waiting for pod downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676 to disappear
Sep 17 03:50:21.441: INFO: Pod downward-api-9915dcd9-6fe1-40c0-90df-cfdfa0e58676 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:21.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1798" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":3994,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:22.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5697" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":209,"skipped":4021,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-xnz2
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 03:50:23.014: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xnz2" in namespace "subpath-4866" to be "Succeeded or Failed"
Sep 17 03:50:23.019: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.81621ms
Sep 17 03:50:25.022: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 2.008344574s
Sep 17 03:50:27.027: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 4.012627013s
Sep 17 03:50:29.032: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 6.018171713s
Sep 17 03:50:31.035: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 8.020796007s
Sep 17 03:50:33.039: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 10.024669597s
Sep 17 03:50:35.042: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 12.027489313s
Sep 17 03:50:37.046: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 14.031803449s
Sep 17 03:50:39.049: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 16.035210054s
Sep 17 03:50:41.053: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 18.039123998s
Sep 17 03:50:43.057: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Running", Reason="", readiness=true. Elapsed: 20.043000375s
Sep 17 03:50:45.060: INFO: Pod "pod-subpath-test-projected-xnz2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.045520985s
STEP: Saw pod success
Sep 17 03:50:45.060: INFO: Pod "pod-subpath-test-projected-xnz2" satisfied condition "Succeeded or Failed"
Sep 17 03:50:45.062: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-subpath-test-projected-xnz2 container test-container-subpath-projected-xnz2: <nil>
STEP: delete the pod
Sep 17 03:50:45.160: INFO: Waiting for pod pod-subpath-test-projected-xnz2 to disappear
Sep 17 03:50:45.164: INFO: Pod pod-subpath-test-projected-xnz2 no longer exists
STEP: Deleting pod pod-subpath-test-projected-xnz2
Sep 17 03:50:45.164: INFO: Deleting pod "pod-subpath-test-projected-xnz2" in namespace "subpath-4866"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":210,"skipped":4022,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
• [SLOW TEST:9.309 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":211,"skipped":4033,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 17 03:50:54.482: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 17 03:50:54.549: INFO: Waiting up to 5m0s for pod "test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814" in namespace "svcaccounts-3802" to be "Succeeded or Failed"
Sep 17 03:50:54.555: INFO: Pod "test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724842ms
Sep 17 03:50:56.560: INFO: Pod "test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01053762s
Sep 17 03:50:58.564: INFO: Pod "test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014213437s
STEP: Saw pod success
Sep 17 03:50:58.564: INFO: Pod "test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814" satisfied condition "Succeeded or Failed"
Sep 17 03:50:58.566: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:50:58.587: INFO: Waiting for pod test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814 to disappear
Sep 17 03:50:58.591: INFO: Pod test-pod-eb068a2c-1bdc-48a1-86e2-de259d4e1814 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:50:58.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3802" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":212,"skipped":4049,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 51 lines ...
• [SLOW TEST:40.580 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":213,"skipped":4138,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:51:39.216: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-6f5e65f8-d2b9-40de-9634-faf4be91559c" in namespace "security-context-test-3961" to be "Succeeded or Failed"
Sep 17 03:51:39.224: INFO: Pod "busybox-readonly-false-6f5e65f8-d2b9-40de-9634-faf4be91559c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.30533ms
Sep 17 03:51:41.227: INFO: Pod "busybox-readonly-false-6f5e65f8-d2b9-40de-9634-faf4be91559c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011098658s
Sep 17 03:51:41.227: INFO: Pod "busybox-readonly-false-6f5e65f8-d2b9-40de-9634-faf4be91559c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:51:41.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3961" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":214,"skipped":4152,"failed":0}
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 32 lines ...
• [SLOW TEST:5.218 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":215,"skipped":4158,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Sep 17 03:51:49.338: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl exec --namespace=svcaccounts-3186 pod-service-account-22008dfc-d04a-4943-a34a-9205dd25022e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:51:49.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3186" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":346,"completed":216,"skipped":4171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.526 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":217,"skipped":4196,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 03:51:58.026: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 03:51:58.063: INFO: Waiting up to 5m0s for pod "downward-api-61feead3-5a06-4f99-861a-738bd7837cbe" in namespace "downward-api-6347" to be "Succeeded or Failed"
Sep 17 03:51:58.067: INFO: Pod "downward-api-61feead3-5a06-4f99-861a-738bd7837cbe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.39424ms
Sep 17 03:52:00.071: INFO: Pod "downward-api-61feead3-5a06-4f99-861a-738bd7837cbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008450836s
STEP: Saw pod success
Sep 17 03:52:00.071: INFO: Pod "downward-api-61feead3-5a06-4f99-861a-738bd7837cbe" satisfied condition "Succeeded or Failed"
Sep 17 03:52:00.074: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downward-api-61feead3-5a06-4f99-861a-738bd7837cbe container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:52:00.098: INFO: Waiting for pod downward-api-61feead3-5a06-4f99-861a-738bd7837cbe to disappear
Sep 17 03:52:00.105: INFO: Pod downward-api-61feead3-5a06-4f99-861a-738bd7837cbe no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:00.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6347" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":218,"skipped":4197,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:00.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8445" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":219,"skipped":4211,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 22 lines ...
• [SLOW TEST:6.157 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":220,"skipped":4263,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Sep 17 03:52:09.053: INFO: stderr: ""
Sep 17 03:52:09.053: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:09.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9874" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":346,"completed":221,"skipped":4283,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should complete a service status lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 42 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:09.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5877" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":222,"skipped":4322,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:52:09.224: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-913f0f46-429b-4ae1-9a89-ae9fa0234890" in namespace "security-context-test-3551" to be "Succeeded or Failed"
Sep 17 03:52:09.228: INFO: Pod "alpine-nnp-false-913f0f46-429b-4ae1-9a89-ae9fa0234890": Phase="Pending", Reason="", readiness=false. Elapsed: 3.869359ms
Sep 17 03:52:11.231: INFO: Pod "alpine-nnp-false-913f0f46-429b-4ae1-9a89-ae9fa0234890": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006492405s
Sep 17 03:52:11.231: INFO: Pod "alpine-nnp-false-913f0f46-429b-4ae1-9a89-ae9fa0234890" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:11.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3551" for this suite.
•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":223,"skipped":4338,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 41 lines ...
• [SLOW TEST:6.310 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":224,"skipped":4346,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 17 03:52:21.685: INFO: File wheezy_udp@dns-test-service-3.dns-9717.svc.cluster.local from pod  dns-9717/dns-test-20251d09-781f-4b6f-a658-804f13451a82 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 03:52:21.690: INFO: File jessie_udp@dns-test-service-3.dns-9717.svc.cluster.local from pod  dns-9717/dns-test-20251d09-781f-4b6f-a658-804f13451a82 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 03:52:21.690: INFO: Lookups using dns-9717/dns-test-20251d09-781f-4b6f-a658-804f13451a82 failed for: [wheezy_udp@dns-test-service-3.dns-9717.svc.cluster.local jessie_udp@dns-test-service-3.dns-9717.svc.cluster.local]

Sep 17 03:52:26.696: INFO: File wheezy_udp@dns-test-service-3.dns-9717.svc.cluster.local from pod  dns-9717/dns-test-20251d09-781f-4b6f-a658-804f13451a82 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 17 03:52:26.700: INFO: Lookups using dns-9717/dns-test-20251d09-781f-4b6f-a658-804f13451a82 failed for: [wheezy_udp@dns-test-service-3.dns-9717.svc.cluster.local]

Sep 17 03:52:31.701: INFO: DNS probes using dns-test-20251d09-781f-4b6f-a658-804f13451a82 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9717.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9717.svc.cluster.local; sleep 1; done
... skipping 16 lines ...
• [SLOW TEST:16.344 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":225,"skipped":4353,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:52:33.898: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 17 03:52:34.027: INFO: Waiting up to 5m0s for pod "pod-30015c5e-84f7-4b46-9d14-b56ed13f1545" in namespace "emptydir-1990" to be "Succeeded or Failed"
Sep 17 03:52:34.031: INFO: Pod "pod-30015c5e-84f7-4b46-9d14-b56ed13f1545": Phase="Pending", Reason="", readiness=false. Elapsed: 3.720569ms
Sep 17 03:52:36.035: INFO: Pod "pod-30015c5e-84f7-4b46-9d14-b56ed13f1545": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008227414s
STEP: Saw pod success
Sep 17 03:52:36.035: INFO: Pod "pod-30015c5e-84f7-4b46-9d14-b56ed13f1545" satisfied condition "Succeeded or Failed"
Sep 17 03:52:36.038: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-30015c5e-84f7-4b46-9d14-b56ed13f1545 container test-container: <nil>
STEP: delete the pod
Sep 17 03:52:36.055: INFO: Waiting for pod pod-30015c5e-84f7-4b46-9d14-b56ed13f1545 to disappear
Sep 17 03:52:36.059: INFO: Pod pod-30015c5e-84f7-4b46-9d14-b56ed13f1545 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:36.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1990" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":226,"skipped":4353,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Sep 17 03:52:38.134: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:52:38.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9499" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":227,"skipped":4368,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
• [SLOW TEST:35.307 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":228,"skipped":4381,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should create a PodDisruptionBudget [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 14 lines ...
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:53:13.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-178" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":229,"skipped":4399,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
• [SLOW TEST:8.309 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":230,"skipped":4402,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:53:23.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-1467" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":231,"skipped":4411,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-bdfec3da-afde-4ee0-8c35-de846766d308
STEP: Creating a pod to test consume secrets
Sep 17 03:53:23.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc" in namespace "projected-6058" to be "Succeeded or Failed"
Sep 17 03:53:23.517: INFO: Pod "pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.423755ms
Sep 17 03:53:25.524: INFO: Pod "pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00878623s
STEP: Saw pod success
Sep 17 03:53:25.524: INFO: Pod "pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc" satisfied condition "Succeeded or Failed"
Sep 17 03:53:25.527: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 03:53:25.551: INFO: Waiting for pod pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc to disappear
Sep 17 03:53:25.556: INFO: Pod pod-projected-secrets-37abffce-8494-4951-95ef-527c8199e3dc no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:53:25.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6058" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":232,"skipped":4427,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:26.397 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":233,"skipped":4434,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 17 03:53:52.080: INFO: stderr: ""
Sep 17 03:53:52.080: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncloud.google.com/v1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetrics.k8s.io/v1beta1\nnetworking.gke.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscalingpolicy.kope.io/v1alpha1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:53:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4774" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":346,"completed":234,"skipped":4448,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 68 lines ...
• [SLOW TEST:12.514 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":235,"skipped":4467,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 03:54:04.634: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:54:06.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9698" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":346,"completed":236,"skipped":4475,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 11 lines ...
Sep 17 03:54:08.866: INFO: The status of Pod labelsupdate18448c1e-0e40-4598-850b-6905a1af0dda is Running (Ready = true)
Sep 17 03:54:09.388: INFO: Successfully updated pod "labelsupdate18448c1e-0e40-4598-850b-6905a1af0dda"
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:54:11.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-708" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":237,"skipped":4490,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":238,"skipped":4510,"failed":0}
S
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Sep 17 03:54:48.305: INFO: Unable to read jessie_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.308: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.313: INFO: Unable to read jessie_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.378: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.381: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.385: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:48.443: INFO: Lookups using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1587 wheezy_tcp@dns-test-service.dns-1587 wheezy_udp@dns-test-service.dns-1587.svc wheezy_tcp@dns-test-service.dns-1587.svc wheezy_udp@_http._tcp.dns-test-service.dns-1587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1587 jessie_tcp@dns-test-service.dns-1587 jessie_udp@dns-test-service.dns-1587.svc jessie_tcp@dns-test-service.dns-1587.svc jessie_udp@_http._tcp.dns-test-service.dns-1587.svc jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc]

Sep 17 03:54:53.448: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.452: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.455: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.458: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.461: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
... skipping 5 lines ...
Sep 17 03:54:53.505: INFO: Unable to read jessie_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.513: INFO: Unable to read jessie_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.517: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.525: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.529: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:53.587: INFO: Lookups using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1587 wheezy_tcp@dns-test-service.dns-1587 wheezy_udp@dns-test-service.dns-1587.svc wheezy_tcp@dns-test-service.dns-1587.svc wheezy_udp@_http._tcp.dns-test-service.dns-1587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1587 jessie_tcp@dns-test-service.dns-1587 jessie_udp@dns-test-service.dns-1587.svc jessie_tcp@dns-test-service.dns-1587.svc jessie_udp@_http._tcp.dns-test-service.dns-1587.svc jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc]

Sep 17 03:54:58.450: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.455: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.458: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.462: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
... skipping 5 lines ...
Sep 17 03:54:58.507: INFO: Unable to read jessie_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.515: INFO: Unable to read jessie_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.587: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.595: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.601: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:54:58.676: INFO: Lookups using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1587 wheezy_tcp@dns-test-service.dns-1587 wheezy_udp@dns-test-service.dns-1587.svc wheezy_tcp@dns-test-service.dns-1587.svc wheezy_udp@_http._tcp.dns-test-service.dns-1587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1587 jessie_tcp@dns-test-service.dns-1587 jessie_udp@dns-test-service.dns-1587.svc jessie_tcp@dns-test-service.dns-1587.svc jessie_udp@_http._tcp.dns-test-service.dns-1587.svc jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc]

Sep 17 03:55:03.451: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.455: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.459: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.463: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.468: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
... skipping 5 lines ...
Sep 17 03:55:03.503: INFO: Unable to read jessie_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.510: INFO: Unable to read jessie_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.513: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.516: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.520: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:03.590: INFO: Lookups using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1587 wheezy_tcp@dns-test-service.dns-1587 wheezy_udp@dns-test-service.dns-1587.svc wheezy_tcp@dns-test-service.dns-1587.svc wheezy_udp@_http._tcp.dns-test-service.dns-1587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1587 jessie_tcp@dns-test-service.dns-1587 jessie_udp@dns-test-service.dns-1587.svc jessie_tcp@dns-test-service.dns-1587.svc jessie_udp@_http._tcp.dns-test-service.dns-1587.svc jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc]

Sep 17 03:55:08.450: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.454: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.457: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.462: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
... skipping 5 lines ...
Sep 17 03:55:08.503: INFO: Unable to read jessie_udp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.506: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587 from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.510: INFO: Unable to read jessie_udp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.514: INFO: Unable to read jessie_tcp@dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.517: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.676: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc from pod dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc: the server could not find the requested resource (get pods dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc)
Sep 17 03:55:08.692: INFO: Lookups using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1587 wheezy_tcp@dns-test-service.dns-1587 wheezy_udp@dns-test-service.dns-1587.svc wheezy_tcp@dns-test-service.dns-1587.svc wheezy_udp@_http._tcp.dns-test-service.dns-1587.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1587.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1587 jessie_tcp@dns-test-service.dns-1587 jessie_udp@dns-test-service.dns-1587.svc jessie_tcp@dns-test-service.dns-1587.svc jessie_udp@_http._tcp.dns-test-service.dns-1587.svc jessie_tcp@_http._tcp.dns-test-service.dns-1587.svc]

Sep 17 03:55:13.586: INFO: DNS probes using dns-1587/dns-test-d58c625d-694f-41fc-ac7a-2c4bcfc7f6fc succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:35.668 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":239,"skipped":4511,"failed":0}
SSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 17 03:55:13.759: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:55:17.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9929" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":240,"skipped":4516,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 31 lines ...
• [SLOW TEST:7.116 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":241,"skipped":4522,"failed":0}
[sig-node] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 03:55:24.877: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 17 03:55:24.915: INFO: Waiting up to 5m0s for pod "var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b" in namespace "var-expansion-6189" to be "Succeeded or Failed"
Sep 17 03:55:24.921: INFO: Pod "var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.005841ms
Sep 17 03:55:26.925: INFO: Pod "var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010010243s
STEP: Saw pod success
Sep 17 03:55:26.925: INFO: Pod "var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b" satisfied condition "Succeeded or Failed"
Sep 17 03:55:26.927: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b container dapi-container: <nil>
STEP: delete the pod
Sep 17 03:55:26.942: INFO: Waiting for pod var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b to disappear
Sep 17 03:55:26.946: INFO: Pod var-expansion-f583aba3-a838-4b44-ad52-d4e05edbb53b no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:55:26.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6189" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":242,"skipped":4522,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 74 lines ...
• [SLOW TEST:52.234 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":243,"skipped":4564,"failed":0}
SSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 19 lines ...
• [SLOW TEST:6.078 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":244,"skipped":4571,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Sep 17 03:56:27.780: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 17 03:56:27.780: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-9373 describe pod agnhost-primary-22j2d'
Sep 17 03:56:27.856: INFO: stderr: ""
Sep 17 03:56:27.856: INFO: stdout: "Name:         agnhost-primary-22j2d\nNamespace:    kubectl-9373\nPriority:     0\nNode:         kt2-4d7c9b85-175c-minion-group-b90v/10.128.0.4\nStart Time:   Fri, 17 Sep 2021 03:56:25 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.1.111\nIPs:\n  IP:           10.64.1.111\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://2e5b2df628e5dcac84e79d5f29587a1d44af7895f8303d9920ba3422afc59cdc\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 17 Sep 2021 03:56:26 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kpmgr (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-kpmgr:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-9373/agnhost-primary-22j2d to kt2-4d7c9b85-175c-minion-group-b90v\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Sep 17 03:56:27.856: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-9373 describe rc agnhost-primary'
Sep 17 03:56:27.938: INFO: stderr: ""
Sep 17 03:56:27.938: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-9373\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  2s    replication-controller  Created pod: agnhost-primary-22j2d\n"
Sep 17 03:56:27.938: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-9373 describe service agnhost-primary'
Sep 17 03:56:28.014: INFO: stderr: ""
Sep 17 03:56:28.014: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-9373\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.0.5.28\nIPs:               10.0.5.28\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.1.111:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 17 03:56:28.018: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-9373 describe node kt2-4d7c9b85-175c-master'
Sep 17 03:56:28.113: INFO: stderr: ""
Sep 17 03:56:28.113: INFO: stdout: "Name:               kt2-4d7c9b85-175c-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-central1\n                    failure-domain.beta.kubernetes.io/zone=us-central1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kt2-4d7c9b85-175c-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-central1\n                    topology.kubernetes.io/zone=us-central1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Fri, 17 Sep 2021 02:38:35 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  kt2-4d7c9b85-175c-master\n  AcquireTime:     <unset>\n  RenewTime:       Fri, 17 Sep 2021 03:56:19 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Fri, 17 Sep 2021 02:38:44 +0000   Fri, 17 Sep 2021 02:38:44 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Fri, 17 Sep 2021 03:55:30 +0000   Fri, 17 Sep 2021 02:38:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 17 Sep 2021 03:55:30 +0000   Fri, 17 Sep 2021 02:38:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 17 Sep 2021 03:55:30 +0000   Fri, 17 Sep 2021 02:38:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 17 Sep 2021 03:55:30 +0000   Fri, 17 Sep 2021 02:38:55 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.128.0.2\n  ExternalIP:   34.69.105.80\n  InternalDNS:  kt2-4d7c9b85-175c-master.c.k8s-infra-e2e-boskos-119.internal\n  Hostname:     kt2-4d7c9b85-175c-master.c.k8s-infra-e2e-boskos-119.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3773744Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3517744Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 71d907b70c089fb084419613c08131ba\n  System UUID:                71d907b7-0c08-9fb0-8441-9613c08131ba\n  Boot ID:                    6c86656b-78bd-4aa4-981d-3b1971d9437b\n  Kernel Version:             5.4.129+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.23.0-alpha.2.69+2f10e6587c07ef\n  Kube-Proxy Version:         v1.23.0-alpha.2.69+2f10e6587c07ef\nPodCIDR:                      10.64.0.0/24\nPodCIDRs:                     10.64.0.0/24\nProviderID:                   gce://k8s-infra-e2e-boskos-119/us-central1-b/kt2-4d7c9b85-175c-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-events-kt2-4d7c9b85-175c-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         76m\n  kube-system                 etcd-server-kt2-4d7c9b85-175c-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         76m\n  kube-system                 fluentd-gcp-v3.2.0-khnsf                            100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    76m\n  kube-system                 konnectivity-server-kt2-4d7c9b85-175c-master        25m (2%)      0 (0%)      0 (0%)           0 (0%)         75m\n  kube-system                 kube-addon-manager-kt2-4d7c9b85-175c-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         75m\n  kube-system                 kube-apiserver-kt2-4d7c9b85-175c-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         76m\n  kube-system                 kube-controller-manager-kt2-4d7c9b85-175c-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         76m\n  kube-system                 kube-scheduler-kt2-4d7c9b85-175c-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         76m\n  kube-system                 l7-lb-controller-kt2-4d7c9b85-175c-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         76m\n  kube-system                 metadata-proxy-v0.1-8hwmk                           32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      77m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        997m (99%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage          0 (0%)       0 (0%)\n  hugepages-2Mi              0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:                      <none>\n"
Sep 17 03:56:28.113: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-9373 describe namespace kubectl-9373'
Sep 17 03:56:28.185: INFO: stderr: ""
Sep 17 03:56:28.185: INFO: stdout: "Name:         kubectl-9373\nLabels:       e2e-framework=kubectl\n              e2e-run=6139c2b9-4f97-4733-badf-c136f982b8c8\n              kubernetes.io/metadata.name=kubectl-9373\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:56:28.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9373" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":346,"completed":245,"skipped":4572,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-7864c713-fc61-4ba9-8fc6-3d6a877f59e1
STEP: Creating a pod to test consume configMaps
Sep 17 03:56:28.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff" in namespace "configmap-3645" to be "Succeeded or Failed"
Sep 17 03:56:28.239: INFO: Pod "pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff": Phase="Pending", Reason="", readiness=false. Elapsed: 5.196172ms
Sep 17 03:56:30.243: INFO: Pod "pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009027287s
STEP: Saw pod success
Sep 17 03:56:30.243: INFO: Pod "pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff" satisfied condition "Succeeded or Failed"
Sep 17 03:56:30.249: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff container agnhost-container: <nil>
STEP: delete the pod
Sep 17 03:56:30.313: INFO: Waiting for pod pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff to disappear
Sep 17 03:56:30.318: INFO: Pod pod-configmaps-d9751c07-4149-4427-b439-64b7bd4eafff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:56:30.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3645" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":246,"skipped":4601,"failed":0}
SSSSSSS
------------------------------
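The pod-phase polling lines above report elapsed time as Go duration strings ("5.196172ms", "2.009027287s", and compound forms like "5m0s" or "10m0s" in the timeout messages). A sketch of converting those to seconds for offline analysis of a log like this one; `go_duration_to_seconds` is a hypothetical helper, and it only handles the units that actually appear here:

```python
import re

def go_duration_to_seconds(s: str) -> float:
    """Convert a Go-style duration string (e.g. "2.009027287s", "5.196172ms",
    "10m0s") to seconds. Covers only the units seen in this log (ms, s, m, h);
    Go's own format also allows us/ns."""
    units = {"ms": 1e-3, "s": 1.0, "m": 60.0, "h": 3600.0}
    total = 0.0
    # "ms" must be tried before "s" and "m" so "5.196172ms" is not split wrongly.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(ms|s|m|h)", s):
        total += float(value) * units[unit]
    return total
```

For example, the "Timed out after 10m0s" messages later in this log correspond to a 600-second wait.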
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 91 lines ...
• [SLOW TEST:42.869 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":247,"skipped":4608,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Sep 17 03:57:18.270: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-8242 explain e2e-test-crd-publish-openapi-5253-crds.spec'
Sep 17 03:57:18.467: INFO: stderr: ""
Sep 17 03:57:18.467: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5253-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 17 03:57:18.467: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-8242 explain e2e-test-crd-publish-openapi-5253-crds.spec.bars'
Sep 17 03:57:18.659: INFO: stderr: ""
Sep 17 03:57:18.659: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-5253-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 17 03:57:18.660: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-8242 explain e2e-test-crd-publish-openapi-5253-crds.spec.bars2'
Sep 17 03:57:18.838: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:57:23.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8242" for this suite.

• [SLOW TEST:9.978 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":248,"skipped":4628,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:28.086 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":249,"skipped":4639,"failed":0}
S
------------------------------
[sig-node] Pods Extended Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:57:51.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8281" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":250,"skipped":4640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 13 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 17 03:57:54.004: INFO: Successfully updated pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215"
Sep 17 03:57:54.004: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215" in namespace "pods-3282" to be "terminated due to deadline exceeded"
Sep 17 03:57:54.008: INFO: Pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215": Phase="Running", Reason="", readiness=true. Elapsed: 3.503851ms
Sep 17 03:57:56.013: INFO: Pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215": Phase="Running", Reason="", readiness=true. Elapsed: 2.008963276s
Sep 17 03:57:58.018: INFO: Pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.013201912s
Sep 17 03:57:58.018: INFO: Pod "pod-update-activedeadlineseconds-97e4b1e7-8044-4d73-a3b3-e7ac5a0de215" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:57:58.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3282" for this suite.

• [SLOW TEST:6.665 seconds]
[sig-node] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":251,"skipped":4676,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 03:58:00.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7241" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":4681,"failed":0}
SSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 71 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
I0917 04:02:11.690328    2890 boskos.go:86] Sending heartbeat to Boskos
I0917 04:07:11.709559    2890 boskos.go:86] Sending heartbeat to Boskos
Sep 17 04:08:00.558: INFO: Timed out waiting for the following pods to schedule
Sep 17 04:08:00.558: INFO: kube-system/konnectivity-agent-d96tx
Sep 17 04:08:00.558: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.6()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436 +0x85
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 17 04:08:00.558: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":346,"completed":252,"skipped":4685,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Service endpoints latency
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 422 lines ...
• [SLOW TEST:10.778 seconds]
[sig-network] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":346,"completed":253,"skipped":4685,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 04:08:12.026: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d" in namespace "downward-api-2747" to be "Succeeded or Failed"
Sep 17 04:08:12.035: INFO: Pod "downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.426539ms
Sep 17 04:08:14.039: INFO: Pod "downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012966538s
STEP: Saw pod success
Sep 17 04:08:14.039: INFO: Pod "downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d" satisfied condition "Succeeded or Failed"
Sep 17 04:08:14.041: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d container client-container: <nil>
STEP: delete the pod
Sep 17 04:08:14.057: INFO: Waiting for pod downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d to disappear
Sep 17 04:08:14.060: INFO: Pod downwardapi-volume-9145fe09-b511-4783-a77f-62491cd1770d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:08:14.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2747" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":254,"skipped":4689,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-010d7e1b-a286-4d53-9515-4548142c05df
STEP: Creating a pod to test consume configMaps
Sep 17 04:08:14.106: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5" in namespace "projected-5033" to be "Succeeded or Failed"
Sep 17 04:08:14.110: INFO: Pod "pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.827695ms
Sep 17 04:08:16.117: INFO: Pod "pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011301526s
STEP: Saw pod success
Sep 17 04:08:16.117: INFO: Pod "pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5" satisfied condition "Succeeded or Failed"
Sep 17 04:08:16.121: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 04:08:16.136: INFO: Waiting for pod pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5 to disappear
Sep 17 04:08:16.139: INFO: Pod pod-projected-configmaps-569142c3-5fc0-4ae7-b41b-9859cc1caac5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:08:16.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5033" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":255,"skipped":4720,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Sep 17 04:08:19.188: INFO: stderr: ""
Sep 17 04:08:19.188: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:08:19.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-736" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":346,"completed":256,"skipped":4783,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 63 lines ...
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0917 04:12:11.725847    2890 boskos.go:86] Sending heartbeat to Boskos
I0917 04:17:11.741837    2890 boskos.go:86] Sending heartbeat to Boskos
Sep 17 04:18:19.378: INFO: Timed out waiting for the following pods to schedule
Sep 17 04:18:19.378: INFO: kube-system/konnectivity-agent-d96tx
Sep 17 04:18:19.378: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.5()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 17 04:18:19.378: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":346,"completed":256,"skipped":4903,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should validate Statefulset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":257,"skipped":4919,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 15 lines ...
Sep 17 04:18:45.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:18:57.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-83" for this suite.
STEP: Destroying namespace "webhook-83-markers" for this suite.
... skipping 3 lines ...
• [SLOW TEST:17.837 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":258,"skipped":4930,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
• [SLOW TEST:300.071 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":259,"skipped":4937,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 14 lines ...
STEP: Creating configMap with name cm-test-opt-create-b628534c-548a-432d-b8da-dae8c320f301
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:02.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2794" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":260,"skipped":4938,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 17 04:24:02.174: INFO: Asynchronously running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-8692 proxy --unix-socket=/tmp/kubectl-proxy-unix1226109969/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:02.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8692" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":346,"completed":261,"skipped":4950,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:06.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5220" for this suite.
STEP: Destroying namespace "webhook-5220-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":262,"skipped":4963,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 17 04:24:10.485: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.492: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.527: INFO: Unable to read jessie_udp@dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.535: INFO: Unable to read jessie_tcp@dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.543: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.578: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local from pod dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c: the server could not find the requested resource (get pods dns-test-a96da22b-9a64-4764-ae03-68e97535f92c)
Sep 17 04:24:10.623: INFO: Lookups using dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c failed for: [wheezy_udp@dns-test-service.dns-2399.svc.cluster.local wheezy_tcp@dns-test-service.dns-2399.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local jessie_udp@dns-test-service.dns-2399.svc.cluster.local jessie_tcp@dns-test-service.dns-2399.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2399.svc.cluster.local]

... skipping 49 lines: the same eight lookups (wheezy/jessie x udp/tcp x the service name and its _http._tcp SRV name) failed identically on retries at 04:24:15, 04:24:20, 04:24:25, 04:24:30, and 04:24:35 ...

Sep 17 04:24:40.738: INFO: DNS probes using dns-2399/dns-test-a96da22b-9a64-4764-ae03-68e97535f92c succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:34.602 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":346,"completed":263,"skipped":4987,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
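The eight lookup keys in the failure lists above follow a fixed pattern: two resolver images (wheezy, jessie) x two query transports (udp, tcp) x two names (the service name and its `_http._tcp` SRV name). A sketch that reconstructs the key list in the order the log prints it (the function name is mine; the key format is taken from the log):

```python
def probe_keys(service, namespace, port="http", proto="tcp"):
    """Build the lookup keys the DNS probe reports, in log order."""
    base = f"{service}.{namespace}.svc.cluster.local"  # service A/AAAA name
    srv = f"_{port}._{proto}.{base}"                   # SRV name for the named port
    return [
        f"{image}_{transport}@{name}"
        for image in ("wheezy", "jessie")   # the two resolver containers
        for name in (base, srv)             # plain name, then SRV name
        for transport in ("udp", "tcp")     # query transport
    ]

keys = probe_keys("dns-test-service", "dns-2399")
print(len(keys), "keys; first:", keys[0])
```

The probe retries the whole matrix every five seconds until all eight lookups succeed, which is why the same list repeats with only the timestamps changing.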
SSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 11 lines ...
STEP: Updating configmap projected-configmap-test-upd-8536826b-0bbf-45a1-aa9b-0ced8d5f4c20
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:45.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9553" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":264,"skipped":4991,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:24:45.011: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 17 04:24:45.049: INFO: Waiting up to 5m0s for pod "pod-266f6a0b-6342-42ed-931f-af9f034b6b9b" in namespace "emptydir-7072" to be "Succeeded or Failed"
Sep 17 04:24:45.055: INFO: Pod "pod-266f6a0b-6342-42ed-931f-af9f034b6b9b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.312931ms
Sep 17 04:24:47.058: INFO: Pod "pod-266f6a0b-6342-42ed-931f-af9f034b6b9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008626491s
STEP: Saw pod success
Sep 17 04:24:47.058: INFO: Pod "pod-266f6a0b-6342-42ed-931f-af9f034b6b9b" satisfied condition "Succeeded or Failed"
Sep 17 04:24:47.060: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-266f6a0b-6342-42ed-931f-af9f034b6b9b container test-container: <nil>
STEP: delete the pod
Sep 17 04:24:47.079: INFO: Waiting for pod pod-266f6a0b-6342-42ed-931f-af9f034b6b9b to disappear
Sep 17 04:24:47.082: INFO: Pod pod-266f6a0b-6342-42ed-931f-af9f034b6b9b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:47.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7072" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":265,"skipped":5003,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
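The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from a poll loop: read the phase, log the elapsed time, stop on a terminal phase or on timeout. A minimal sketch of that loop (the `get_phase` hook is hypothetical; the real framework reads the phase from the API server):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, poll=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal pod phase or timeout expires."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed  # condition "Succeeded or Failed" satisfied
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase!r} after {elapsed:.1f}s")
        sleep(poll)
```

In the log above the pod moves from Pending to Succeeded between the first and second poll, roughly two seconds apart, matching the `Elapsed:` values printed on each check.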

------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 04:24:47.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4" in namespace "projected-6705" to be "Succeeded or Failed"
Sep 17 04:24:47.130: INFO: Pod "downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.82052ms
Sep 17 04:24:49.135: INFO: Pod "downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007481236s
STEP: Saw pod success
Sep 17 04:24:49.135: INFO: Pod "downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4" satisfied condition "Succeeded or Failed"
Sep 17 04:24:49.137: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4 container client-container: <nil>
STEP: delete the pod
Sep 17 04:24:49.153: INFO: Waiting for pod downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4 to disappear
Sep 17 04:24:49.156: INFO: Pod downwardapi-volume-f7251668-f0b9-4c9e-87f4-06b7b611b1d4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:49.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6705" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":266,"skipped":5003,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 04:24:49.205: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-db3eaee5-7f66-424d-832f-ad84bec98d80" in namespace "security-context-test-6969" to be "Succeeded or Failed"
Sep 17 04:24:49.209: INFO: Pod "busybox-privileged-false-db3eaee5-7f66-424d-832f-ad84bec98d80": Phase="Pending", Reason="", readiness=false. Elapsed: 3.362305ms
Sep 17 04:24:51.212: INFO: Pod "busybox-privileged-false-db3eaee5-7f66-424d-832f-ad84bec98d80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006729435s
Sep 17 04:24:51.212: INFO: Pod "busybox-privileged-false-db3eaee5-7f66-424d-832f-ad84bec98d80" satisfied condition "Succeeded or Failed"
Sep 17 04:24:51.218: INFO: Got logs for pod "busybox-privileged-false-db3eaee5-7f66-424d-832f-ad84bec98d80": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:51.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6969" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":267,"skipped":5016,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
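For the privileged=false case, the spec passes when the privileged network operation is denied inside the container, so the `RTNETLINK answers: Operation not permitted` line in the pod log above is the expected outcome, not an error. A sketch of that check (the function name is mine; the expected string is taken from the log):

```python
EXPECTED_DENIAL = "Operation not permitted"

def ran_unprivileged(container_log):
    """True if the container's ip(8) call was denied, i.e. the pod really ran
    without the privileges that a privileged=true pod would have."""
    return EXPECTED_DENIAL in container_log

log = "ip: RTNETLINK answers: Operation not permitted\n"
print(ran_unprivileged(log))
```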
SS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:24:51.225: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 17 04:24:51.266: INFO: Waiting up to 5m0s for pod "pod-f5271384-08e0-4231-92d4-1249b4570944" in namespace "emptydir-5326" to be "Succeeded or Failed"
Sep 17 04:24:51.271: INFO: Pod "pod-f5271384-08e0-4231-92d4-1249b4570944": Phase="Pending", Reason="", readiness=false. Elapsed: 5.048179ms
Sep 17 04:24:53.275: INFO: Pod "pod-f5271384-08e0-4231-92d4-1249b4570944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008838575s
STEP: Saw pod success
Sep 17 04:24:53.275: INFO: Pod "pod-f5271384-08e0-4231-92d4-1249b4570944" satisfied condition "Succeeded or Failed"
Sep 17 04:24:53.277: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-f5271384-08e0-4231-92d4-1249b4570944 container test-container: <nil>
STEP: delete the pod
Sep 17 04:24:53.292: INFO: Waiting for pod pod-f5271384-08e0-4231-92d4-1249b4570944 to disappear
Sep 17 04:24:53.294: INFO: Pod pod-f5271384-08e0-4231-92d4-1249b4570944 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:24:53.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5326" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":268,"skipped":5018,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":269,"skipped":5066,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 04:25:56.469: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:04.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5873" for this suite.

• [SLOW TEST:8.050 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":270,"skipped":5082,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-c42e63c7-57ca-40e8-a193-0d189ae2749c
STEP: Creating a pod to test consume secrets
Sep 17 04:26:04.560: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289" in namespace "projected-3293" to be "Succeeded or Failed"
Sep 17 04:26:04.563: INFO: Pod "pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08691ms
Sep 17 04:26:06.567: INFO: Pod "pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006493267s
STEP: Saw pod success
Sep 17 04:26:06.567: INFO: Pod "pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289" satisfied condition "Succeeded or Failed"
Sep 17 04:26:06.576: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 04:26:06.594: INFO: Waiting for pod pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289 to disappear
Sep 17 04:26:06.596: INFO: Pod pod-projected-secrets-864f3e96-4fd9-4847-bb94-b0e16c419289 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:06.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3293" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":5100,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
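Each completed spec emits a single-line JSON progress record like the `{"msg":"PASSED ...","total":346,"completed":271,...}` line above. A small post-processing sketch (a hypothetical helper for analyzing a saved build-log.txt, not part of the e2e framework itself) can pull the latest record to summarize suite progress:

```python
import json

def summarize_progress(log_lines):
    """Return the most recent Ginkgo-style JSON progress record in a log.

    Records may be prefixed by the "•" spinner character, as in this log.
    Returns None if no record is found.
    """
    last = None
    for line in log_lines:
        candidate = line.lstrip("\u2022").strip()  # strip leading "•" if present
        if not candidate.startswith('{"msg":'):
            continue
        try:
            last = json.loads(candidate)
        except json.JSONDecodeError:
            continue  # a line that merely resembled a progress record
    return last
```

For the record above, `summarize_progress` would report 271 of 346 specs completed with 2 failures, matching the `failures` list repeated on every line.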
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 04:26:06.642: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:07.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4248" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":346,"completed":272,"skipped":5117,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
• [SLOW TEST:10.507 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":273,"skipped":5130,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1873/configmap-test-73f6a866-a6ba-448b-99f6-d7083b9f518e
STEP: Creating a pod to test consume configMaps
Sep 17 04:26:17.815: INFO: Waiting up to 5m0s for pod "pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604" in namespace "configmap-1873" to be "Succeeded or Failed"
Sep 17 04:26:17.819: INFO: Pod "pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604": Phase="Pending", Reason="", readiness=false. Elapsed: 3.761978ms
Sep 17 04:26:19.823: INFO: Pod "pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008102952s
STEP: Saw pod success
Sep 17 04:26:19.823: INFO: Pod "pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604" satisfied condition "Succeeded or Failed"
Sep 17 04:26:19.825: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604 container env-test: <nil>
STEP: delete the pod
Sep 17 04:26:19.841: INFO: Waiting for pod pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604 to disappear
Sep 17 04:26:19.845: INFO: Pod pod-configmaps-331ef8c1-96ed-438d-8a0c-7fa734f15604 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:19.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1873" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":274,"skipped":5172,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
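The repeated "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'" / "Elapsed: ..." lines reflect a poll-until-terminal-phase loop. A simplified sketch of that pattern (hypothetical helper names; the real framework code lives under test/e2e/framework):

```python
import time

def wait_for_pod_phase(get_phase, want=("Succeeded", "Failed"),
                       timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a phase in `want` or `timeout` elapses.

    Returns the terminal phase on success; raises TimeoutError otherwise.
    `clock` and `sleep` are injectable for testing.
    """
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in want:
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)
```

The injectable clock mirrors how the log reports both the 5m0s budget and the per-check elapsed time (e.g. "Elapsed: 2.008102952s" for the second poll above).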
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 17 04:26:19.889: INFO: Asynchronously running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4985 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:19.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4985" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":346,"completed":275,"skipped":5194,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.141 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":276,"skipped":5245,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 04:26:36.098: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-0d581a46-5850-49f1-a6cf-4e37d5a13f07
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:36.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2223" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":277,"skipped":5288,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:40.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3095" for this suite.
STEP: Destroying namespace "webhook-3095-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":278,"skipped":5288,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-49124a07-5299-48a7-9204-ac1c89768245
STEP: Creating secret with name secret-projected-all-test-volume-4be01fd8-7e52-4f9b-9ad0-2a445fdce195
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 17 04:26:40.241: INFO: Waiting up to 5m0s for pod "projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0" in namespace "projected-229" to be "Succeeded or Failed"
Sep 17 04:26:40.243: INFO: Pod "projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.545231ms
Sep 17 04:26:42.247: INFO: Pod "projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006145317s
STEP: Saw pod success
Sep 17 04:26:42.247: INFO: Pod "projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0" satisfied condition "Succeeded or Failed"
Sep 17 04:26:42.249: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0 container projected-all-volume-test: <nil>
STEP: delete the pod
Sep 17 04:26:42.266: INFO: Waiting for pod projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0 to disappear
Sep 17 04:26:42.269: INFO: Pod projected-volume-1352a1da-02b7-4b12-befd-4d742c3924b0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:42.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-229" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":279,"skipped":5293,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 15 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9830" for this suite.
•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":5300,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-9ca85172-c74a-4595-8067-ea1df3938836
STEP: Creating a pod to test consume configMaps
Sep 17 04:26:47.066: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec" in namespace "projected-3122" to be "Succeeded or Failed"
Sep 17 04:26:47.070: INFO: Pod "pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326782ms
Sep 17 04:26:49.073: INFO: Pod "pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00730417s
STEP: Saw pod success
Sep 17 04:26:49.073: INFO: Pod "pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec" satisfied condition "Succeeded or Failed"
Sep 17 04:26:49.075: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec container agnhost-container: <nil>
STEP: delete the pod
Sep 17 04:26:49.091: INFO: Waiting for pod pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec to disappear
Sep 17 04:26:49.095: INFO: Pod pod-projected-configmaps-7ceca27a-2701-4307-adb8-a310b9b091ec no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:49.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3122" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":281,"skipped":5303,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 16 lines ...
Sep 17 04:26:50.214: INFO: Pod quantity 3 is different from expected quantity 0
Sep 17 04:26:51.214: INFO: Pod quantity 3 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:26:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3922" for this suite.
•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":282,"skipped":5340,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
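The "Pod quantity 3 is different from expected quantity 0" lines above come from re-listing pods after a DeleteCollection call until the count drains to zero. A minimal sketch of that wait loop (hypothetical helper, assuming an injected `list_pods` callable rather than a real API client):

```python
def wait_for_pod_count(list_pods, expected=0, retries=30, sleep=None):
    """Re-list pods until len(list_pods()) == expected or retries run out.

    Logs each mismatch in the same form the e2e framework uses.
    Returns True once the count matches, False if it never does.
    """
    for _ in range(retries):
        quantity = len(list_pods())
        if quantity == expected:
            return True
        print(f"Pod quantity {quantity} is different from expected quantity {expected}")
        if sleep is not None:
            sleep(1)  # the framework polls roughly once per second here
    return False
```

With three pods draining over successive lists, this produces exactly the two mismatch lines seen in the log before the final successful check.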
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 58 lines ...
• [SLOW TEST:12.378 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":283,"skipped":5350,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 147 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":346,"completed":284,"skipped":5370,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:22.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7008" for this suite.
STEP: Destroying namespace "webhook-7008-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":285,"skipped":5375,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 04:27:22.605: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 04:27:24.668: INFO: Deleting pod "var-expansion-773d7a4f-2a6f-4231-9261-13e10fc562d4" in namespace "var-expansion-8399"
Sep 17 04:27:24.675: INFO: Wait up to 5m0s for pod "var-expansion-773d7a4f-2a6f-4231-9261-13e10fc562d4" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:26.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8399" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":286,"skipped":5434,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 35 lines ...
• [SLOW TEST:8.686 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":287,"skipped":5442,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  Deployment should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 24 lines ...
Sep 17 04:27:37.572: INFO: Pod "test-new-deployment-5c557bc5bf-vqbqw" is not available:
&Pod{ObjectMeta:{test-new-deployment-5c557bc5bf-vqbqw test-new-deployment-5c557bc5bf- deployment-1390  3306e178-f456-4573-980c-68c7e5112f3d 25881 0 2021-09-17 04:27:37 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5c557bc5bf] map[] [{apps/v1 ReplicaSet test-new-deployment-5c557bc5bf e2f84ab3-7dcb-4939-85d6-268386e81acb 0xc004f186f0 0xc004f186f1}] []  [{kube-controller-manager Update v1 2021-09-17 04:27:37 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e2f84ab3-7dcb-4939-85d6-268386e81acb\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 04:27:37 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-456wz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-456wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-4d7c9b85-175c-minion-group-b90v,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:27:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:27:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-17 04:27:37 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:27:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 04:27:37 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:37.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1390" for this suite.
•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":288,"skipped":5457,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:27:37.624: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 17 04:27:37.706: INFO: Waiting up to 5m0s for pod "pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8" in namespace "emptydir-3133" to be "Succeeded or Failed"
Sep 17 04:27:37.712: INFO: Pod "pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.785001ms
Sep 17 04:27:39.716: INFO: Pod "pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010677311s
STEP: Saw pod success
Sep 17 04:27:39.716: INFO: Pod "pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8" satisfied condition "Succeeded or Failed"
Sep 17 04:27:39.719: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8 container test-container: <nil>
STEP: delete the pod
Sep 17 04:27:39.735: INFO: Waiting for pod pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8 to disappear
Sep 17 04:27:39.739: INFO: Pod pod-e05bf69b-3bcf-440e-916d-c44df9c7fce8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:39.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3133" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":289,"skipped":5477,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-network] EndpointSlice 
  should support creating EndpointSlice API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 24 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:39.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7781" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":290,"skipped":5480,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:27:39.857: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 17 04:27:39.902: INFO: Waiting up to 5m0s for pod "pod-93d02783-a13b-4331-88b5-a273ef520529" in namespace "emptydir-9006" to be "Succeeded or Failed"
Sep 17 04:27:39.907: INFO: Pod "pod-93d02783-a13b-4331-88b5-a273ef520529": Phase="Pending", Reason="", readiness=false. Elapsed: 5.291473ms
Sep 17 04:27:41.914: INFO: Pod "pod-93d02783-a13b-4331-88b5-a273ef520529": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01199334s
STEP: Saw pod success
Sep 17 04:27:41.914: INFO: Pod "pod-93d02783-a13b-4331-88b5-a273ef520529" satisfied condition "Succeeded or Failed"
Sep 17 04:27:41.921: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-93d02783-a13b-4331-88b5-a273ef520529 container test-container: <nil>
STEP: delete the pod
Sep 17 04:27:41.958: INFO: Waiting for pod pod-93d02783-a13b-4331-88b5-a273ef520529 to disappear
Sep 17 04:27:41.961: INFO: Pod pod-93d02783-a13b-4331-88b5-a273ef520529 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:27:41.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9006" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5481,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-7465
I0917 04:27:42.089337   97125 runners.go:193] Created replication controller with name: externalname-service, namespace: services-7465, replica count: 2
Sep 17 04:27:45.140: INFO: Creating new exec pod
I0917 04:27:45.140131   97125 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 04:27:48.159: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 04:27:50.339: INFO: rc: 1
Sep 17 04:27:50.339: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 04:27:51.340: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 04:27:53.602: INFO: rc: 1
Sep 17 04:27:53.602: INFO: Service reachability failing with error: error running /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 04:27:54.340: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 04:27:54.567: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Sep 17 04:27:54.567: INFO: stdout: "externalname-service-ms7tp"
Sep 17 04:27:54.567: INFO: Running '/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubectl --server=https://34.69.105.80 --kubeconfig=/logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7465 exec execpodsvrwq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.194.125 80'
... skipping 19 lines ...
• [SLOW TEST:14.346 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":292,"skipped":5514,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-network] EndpointSlice 
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 19 lines ...
• [SLOW TEST:30.185 seconds]
[sig-network] EndpointSlice
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":293,"skipped":5518,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 36 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":294,"skipped":5534,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 14 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:29:28.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8705" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":295,"skipped":5534,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:242.609 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5543,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 15 lines ...
• [SLOW TEST:7.056 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":297,"skipped":5556,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 2 lines ...
Sep 17 04:33:38.519: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 04:33:38.554: INFO: Waiting up to 5m0s for pod "security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7" in namespace "security-context-5056" to be "Succeeded or Failed"
Sep 17 04:33:38.558: INFO: Pod "security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.599138ms
Sep 17 04:33:40.562: INFO: Pod "security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008118165s
STEP: Saw pod success
Sep 17 04:33:40.562: INFO: Pod "security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7" satisfied condition "Succeeded or Failed"
Sep 17 04:33:40.565: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7 container test-container: <nil>
STEP: delete the pod
Sep 17 04:33:40.640: INFO: Waiting for pod security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7 to disappear
Sep 17 04:33:40.645: INFO: Pod security-context-76fa6837-8758-4fea-b2c8-58fb0e3621c7 no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:40.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5056" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":298,"skipped":5593,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:33:40.654: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 17 04:33:40.715: INFO: Waiting up to 5m0s for pod "pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c" in namespace "emptydir-5925" to be "Succeeded or Failed"
Sep 17 04:33:40.724: INFO: Pod "pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.404057ms
Sep 17 04:33:42.728: INFO: Pod "pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012771545s
STEP: Saw pod success
Sep 17 04:33:42.728: INFO: Pod "pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c" satisfied condition "Succeeded or Failed"
Sep 17 04:33:42.732: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c container test-container: <nil>
STEP: delete the pod
Sep 17 04:33:42.750: INFO: Waiting for pod pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c to disappear
Sep 17 04:33:42.752: INFO: Pod pod-5196f02b-8fc3-40df-9c7c-9b22ffb8fb6c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:42.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5925" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":299,"skipped":5601,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Sep 17 04:33:42.825: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7236" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":300,"skipped":5632,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-d3561186-a200-4be5-98e9-813622ab182a
STEP: Creating a pod to test consume secrets
Sep 17 04:33:42.873: INFO: Waiting up to 5m0s for pod "pod-secrets-91954744-f7ba-468b-9290-a86d9285722f" in namespace "secrets-9913" to be "Succeeded or Failed"
Sep 17 04:33:42.877: INFO: Pod "pod-secrets-91954744-f7ba-468b-9290-a86d9285722f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.424622ms
Sep 17 04:33:44.880: INFO: Pod "pod-secrets-91954744-f7ba-468b-9290-a86d9285722f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007494703s
STEP: Saw pod success
Sep 17 04:33:44.880: INFO: Pod "pod-secrets-91954744-f7ba-468b-9290-a86d9285722f" satisfied condition "Succeeded or Failed"
Sep 17 04:33:44.882: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-91954744-f7ba-468b-9290-a86d9285722f container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 04:33:44.915: INFO: Waiting for pod pod-secrets-91954744-f7ba-468b-9290-a86d9285722f to disappear
Sep 17 04:33:44.923: INFO: Pod pod-secrets-91954744-f7ba-468b-9290-a86d9285722f no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:44.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9913" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":301,"skipped":5648,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-3609/secret-test-510f7cce-696b-45e0-85b9-fe0c40b62130
STEP: Creating a pod to test consume secrets
Sep 17 04:33:44.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1" in namespace "secrets-3609" to be "Succeeded or Failed"
Sep 17 04:33:44.974: INFO: Pod "pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.600435ms
Sep 17 04:33:46.977: INFO: Pod "pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007960718s
STEP: Saw pod success
Sep 17 04:33:46.977: INFO: Pod "pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1" satisfied condition "Succeeded or Failed"
Sep 17 04:33:46.979: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1 container env-test: <nil>
STEP: delete the pod
Sep 17 04:33:46.994: INFO: Waiting for pod pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1 to disappear
Sep 17 04:33:47.006: INFO: Pod pod-configmaps-3ee831db-8203-4315-9b8d-0f72ece9a1d1 no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:47.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3609" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":302,"skipped":5654,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:33:47.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3641" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":303,"skipped":5696,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:7.216 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":304,"skipped":5698,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 119 lines ...
• [SLOW TEST:6.944 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":305,"skipped":5711,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:34:01.234: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 17 04:34:01.279: INFO: Waiting up to 5m0s for pod "pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff" in namespace "emptydir-5842" to be "Succeeded or Failed"
Sep 17 04:34:01.282: INFO: Pod "pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.768872ms
Sep 17 04:34:03.285: INFO: Pod "pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006160073s
STEP: Saw pod success
Sep 17 04:34:03.285: INFO: Pod "pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff" satisfied condition "Succeeded or Failed"
Sep 17 04:34:03.287: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff container test-container: <nil>
STEP: delete the pod
Sep 17 04:34:03.300: INFO: Waiting for pod pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff to disappear
Sep 17 04:34:03.303: INFO: Pod pod-8433733c-1c1b-48aa-8b65-bd4c8a9d0aff no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:03.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5842" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":306,"skipped":5725,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 74 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:07.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3614" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":346,"completed":307,"skipped":5727,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should validate Deployment Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 62 lines ...
Sep 17 04:34:09.692: INFO: Pod "test-deployment-cvccd-d9bb78c49-vxtwz" is available:
&Pod{ObjectMeta:{test-deployment-cvccd-d9bb78c49-vxtwz test-deployment-cvccd-d9bb78c49- deployment-3570  cc93505c-3fc4-4a55-99c8-76d8a29fbe69 27288 0 2021-09-17 04:34:07 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:d9bb78c49] map[] [{apps/v1 ReplicaSet test-deployment-cvccd-d9bb78c49 b70e6496-4813-4172-b2fa-347b228a306f 0xc005891730 0xc005891731}] []  [{kube-controller-manager Update v1 2021-09-17 04:34:07 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b70e6496-4813-4172-b2fa-347b228a306f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 04:34:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.37\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lwt8q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lwt8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-4d7c9b85-175c-minion-group-94gp,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:34:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:34:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:34:09 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 04:34:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:10.64.3.37,StartTime:2021-09-17 04:34:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-17 04:34:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5be6e64288d492a16222a266b9cdb982c25973f62da4059f7ea5a92798921e99,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.37,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:09.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3570" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":308,"skipped":5747,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 04:34:09.699: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 17 04:34:09.734: INFO: Waiting up to 5m0s for pod "pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109" in namespace "emptydir-1533" to be "Succeeded or Failed"
Sep 17 04:34:09.736: INFO: Pod "pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12156ms
Sep 17 04:34:11.739: INFO: Pod "pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005172009s
STEP: Saw pod success
Sep 17 04:34:11.739: INFO: Pod "pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109" satisfied condition "Succeeded or Failed"
Sep 17 04:34:11.741: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-b90v pod pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109 container test-container: <nil>
STEP: delete the pod
Sep 17 04:34:11.780: INFO: Waiting for pod pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109 to disappear
Sep 17 04:34:11.787: INFO: Pod pod-6de4f2f5-c2e4-4529-9c12-95dc99b9d109 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:11.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1533" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5787,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 17 04:34:11.850: INFO: The status of Pod pod-exec-websocket-fcdb884d-b0c3-4f03-a946-2c9dfe5ffe36 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 04:34:13.854: INFO: The status of Pod pod-exec-websocket-fcdb884d-b0c3-4f03-a946-2c9dfe5ffe36 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:13.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8921" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":310,"skipped":5809,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:17.076 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":311,"skipped":5823,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:6.803 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":312,"skipped":5841,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:41.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5127" for this suite.
STEP: Destroying namespace "webhook-5127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":313,"skipped":5842,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 04:34:41.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775" in namespace "downward-api-4957" to be "Succeeded or Failed"
Sep 17 04:34:41.484: INFO: Pod "downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775": Phase="Pending", Reason="", readiness=false. Elapsed: 5.160129ms
Sep 17 04:34:43.488: INFO: Pod "downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008526538s
STEP: Saw pod success
Sep 17 04:34:43.488: INFO: Pod "downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775" satisfied condition "Succeeded or Failed"
Sep 17 04:34:43.490: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775 container client-container: <nil>
STEP: delete the pod
Sep 17 04:34:43.513: INFO: Waiting for pod downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775 to disappear
Sep 17 04:34:43.518: INFO: Pod downwardapi-volume-98158ed1-5be6-4800-b9c1-0fead71a8775 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:43.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4957" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":5862,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:34:43.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5621" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":315,"skipped":5892,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-6gzw
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 04:34:43.660: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6gzw" in namespace "subpath-914" to be "Succeeded or Failed"
Sep 17 04:34:43.664: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272588ms
Sep 17 04:34:45.669: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 2.009229612s
Sep 17 04:34:47.673: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 4.013149742s
Sep 17 04:34:49.677: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 6.016945232s
Sep 17 04:34:51.680: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 8.020001953s
Sep 17 04:34:53.685: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 10.024329372s
Sep 17 04:34:55.690: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 12.029322465s
Sep 17 04:34:57.695: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 14.035036161s
Sep 17 04:34:59.703: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 16.042612437s
Sep 17 04:35:01.711: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 18.050836902s
Sep 17 04:35:03.715: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Running", Reason="", readiness=true. Elapsed: 20.054803071s
Sep 17 04:35:05.722: INFO: Pod "pod-subpath-test-configmap-6gzw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.062260331s
STEP: Saw pod success
Sep 17 04:35:05.723: INFO: Pod "pod-subpath-test-configmap-6gzw" satisfied condition "Succeeded or Failed"
Sep 17 04:35:05.730: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-subpath-test-configmap-6gzw container test-container-subpath-configmap-6gzw: <nil>
STEP: delete the pod
Sep 17 04:35:05.764: INFO: Waiting for pod pod-subpath-test-configmap-6gzw to disappear
Sep 17 04:35:05.770: INFO: Pod pod-subpath-test-configmap-6gzw no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6gzw
Sep 17 04:35:05.770: INFO: Deleting pod "pod-subpath-test-configmap-6gzw" in namespace "subpath-914"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":316,"skipped":5927,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 51 lines ...
• [SLOW TEST:21.163 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":317,"skipped":5936,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 04:35:26.946: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 17 04:35:26.979: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:35:29.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3195" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":318,"skipped":5939,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 17 04:35:29.811: INFO: stderr: ""
Sep 17 04:35:29.811: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://34.69.105.80\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:35:29.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1908" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services are included in cluster-info  [Conformance]","total":346,"completed":319,"skipped":5943,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 8 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:35:34.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6701" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":320,"skipped":6000,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":321,"skipped":6029,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 04:35:56.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87" in namespace "projected-9572" to be "Succeeded or Failed"
Sep 17 04:35:56.928: INFO: Pod "downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.802573ms
Sep 17 04:35:58.935: INFO: Pod "downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010586142s
STEP: Saw pod success
Sep 17 04:35:58.935: INFO: Pod "downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87" satisfied condition "Succeeded or Failed"
Sep 17 04:35:58.937: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87 container client-container: <nil>
STEP: delete the pod
Sep 17 04:35:58.952: INFO: Waiting for pod downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87 to disappear
Sep 17 04:35:58.955: INFO: Pod downwardapi-volume-19f98e59-d038-43af-a937-bb4497e00f87 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:35:58.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9572" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":322,"skipped":6031,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
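The downward-API behavior checked above can be sketched with a manifest like the following (illustrative names; not the test's actual pod spec). Because the container sets no `resources.limits.cpu`, the projected `limits.cpu` field falls back to the node's allocatable CPU.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo     # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # no resources.limits.cpu here, so the projected value defaults
    # to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```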
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-40a2a850-a070-4dd8-8101-94353d26b2e2
STEP: Creating a pod to test consume secrets
Sep 17 04:35:58.999: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec" in namespace "projected-7840" to be "Succeeded or Failed"
Sep 17 04:35:59.004: INFO: Pod "pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.65134ms
Sep 17 04:36:01.010: INFO: Pod "pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010756826s
STEP: Saw pod success
Sep 17 04:36:01.010: INFO: Pod "pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec" satisfied condition "Succeeded or Failed"
Sep 17 04:36:01.014: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 04:36:01.032: INFO: Waiting for pod pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec to disappear
Sep 17 04:36:01.037: INFO: Pod pod-projected-secrets-c03eed0d-769b-449b-b631-a9da29964fec no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:01.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7840" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":323,"skipped":6038,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
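Consuming one projected secret through multiple volumes, as the test above does, looks roughly like this (a sketch with illustrative names): two volume entries reference the same secret and mount at different paths in the same container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
  volumes:
  # both volumes project the same secret
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test        # hypothetical secret name
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test
```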
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 17 04:36:01.162: INFO: stderr: ""
Sep 17 04:36:01.162: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.69+2f10e6587c07ef\", GitCommit:\"2f10e6587c07ef94361b64c8a1f9918de07bf852\", GitTreeState:\"clean\", BuildDate:\"2021-09-17T00:19:00Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.69+2f10e6587c07ef\", GitCommit:\"2f10e6587c07ef94361b64c8a1f9918de07bf852\", GitTreeState:\"clean\", BuildDate:\"2021-09-17T00:19:00Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:01.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9573" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check if all data is printed  [Conformance]","total":346,"completed":324,"skipped":6042,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-95a92237-f846-4f77-ab6b-99b6579ed97c
STEP: Creating a pod to test consume configMaps
Sep 17 04:36:01.210: INFO: Waiting up to 5m0s for pod "pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a" in namespace "configmap-419" to be "Succeeded or Failed"
Sep 17 04:36:01.217: INFO: Pod "pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.662609ms
Sep 17 04:36:03.221: INFO: Pod "pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011122712s
STEP: Saw pod success
Sep 17 04:36:03.221: INFO: Pod "pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a" satisfied condition "Succeeded or Failed"
Sep 17 04:36:03.223: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a container agnhost-container: <nil>
STEP: delete the pod
Sep 17 04:36:03.237: INFO: Waiting for pod pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a to disappear
Sep 17 04:36:03.241: INFO: Pod pod-configmaps-d54314e1-339f-44b4-959c-f707e250595a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:03.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-419" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":325,"skipped":6056,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
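A configMap volume "with mappings", as tested above, uses the `items` field to project a chosen key to a chosen file path instead of mounting every key under its own name. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map       # hypothetical configMap name
      items:
      - key: data-1                         # key in the ConfigMap
        path: path/to/data                  # file path inside the mount
```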
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 17 04:36:03.247: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 04:36:03.296: INFO: created pod
Sep 17 04:36:03.296: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2741" to be "Succeeded or Failed"
Sep 17 04:36:03.302: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.431112ms
Sep 17 04:36:05.305: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008810982s
Sep 17 04:36:07.310: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013686768s
STEP: Saw pod success
Sep 17 04:36:07.310: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 17 04:36:37.311: INFO: polling logs
Sep 17 04:36:37.318: INFO: Pod logs: 
2021/09/17 04:36:04 OK: Got token
2021/09/17 04:36:04 validating with in-cluster discovery
2021/09/17 04:36:04 OK: got issuer https://kubernetes.default.svc.cluster.local
2021/09/17 04:36:04 Full, not-validated claims: 
... skipping 13 lines ...
• [SLOW TEST:34.083 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":326,"skipped":6094,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:37.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8005" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":327,"skipped":6096,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 6 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:39.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8919" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":328,"skipped":6121,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
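Setting the `kernel.shm_rmid_forced` sysctl on a pod, as the test above does, goes through the pod-level security context. A sketch (illustrative name; `kernel.shm_rmid_forced` is one of the safe sysctls kubelets allow by default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo           # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: test-container
    image: busybox
    # the container can then read the value back to verify it
    command: ["sh", "-c", "cat /proc/sys/kernel/shm_rmid_forced"]
```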
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:43.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4004" for this suite.
STEP: Destroying namespace "webhook-4004-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":329,"skipped":6123,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 31 lines ...
• [SLOW TEST:7.616 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":330,"skipped":6127,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Sep 17 04:36:51.704: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:36:51.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1543" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":331,"skipped":6147,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:7.180 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":332,"skipped":6154,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 04:36:58.902: INFO: >>> kubeConfig: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 04:37:00.947: INFO: Deleting pod "var-expansion-139fbea3-c3a7-4c1f-85f3-a02d13507e01" in namespace "var-expansion-2218"
Sep 17 04:37:00.953: INFO: Wait up to 5m0s for pod "var-expansion-139fbea3-c3a7-4c1f-85f3-a02d13507e01" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:37:02.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2218" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":333,"skipped":6220,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-ffeb5707-1662-4ed4-ac3c-856810e05745
STEP: Creating a pod to test consume secrets
Sep 17 04:37:03.016: INFO: Waiting up to 5m0s for pod "pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16" in namespace "secrets-9577" to be "Succeeded or Failed"
Sep 17 04:37:03.019: INFO: Pod "pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16": Phase="Pending", Reason="", readiness=false. Elapsed: 3.645784ms
Sep 17 04:37:05.023: INFO: Pod "pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006812165s
STEP: Saw pod success
Sep 17 04:37:05.023: INFO: Pod "pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16" satisfied condition "Succeeded or Failed"
Sep 17 04:37:05.025: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 04:37:05.050: INFO: Waiting for pod pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16 to disappear
Sep 17 04:37:05.052: INFO: Pod pod-secrets-02dfe18b-c420-4417-b8ac-248450f5fc16 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:37:05.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9577" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":334,"skipped":6229,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
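The `defaultMode` variant tested above controls the file permissions of projected secret keys. A minimal sketch (illustrative names; 0400 is an example mode, owner-read-only):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    # stat prints the effective mode of the projected file
    command: ["sh", "-c", "stat -c '%a' /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test       # hypothetical secret name
      defaultMode: 0400             # applied to every projected key file
```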
SSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should observe PodDisruptionBudget status updated [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 10 lines ...
STEP: Waiting for all pods to be running
Sep 17 04:37:07.172: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:37:09.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9933" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":335,"skipped":6241,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 34 lines ...
• [SLOW TEST:20.106 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":336,"skipped":6308,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 17 04:37:29.340: INFO: The status of Pod busybox-scheduling-15ece72b-ea4f-425e-b789-d4724925039f is Pending, waiting for it to be Running (with Ready = true)
Sep 17 04:37:31.343: INFO: The status of Pod busybox-scheduling-15ece72b-ea4f-425e-b789-d4724925039f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:37:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7525" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":337,"skipped":6324,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] RuntimeClass 
   should support RuntimeClasses API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] RuntimeClass
... skipping 18 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:37:31.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-4476" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":346,"completed":338,"skipped":6351,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 27 lines ...
• [SLOW TEST:76.329 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":339,"skipped":6359,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:7.101 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":340,"skipped":6372,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 31 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":341,"skipped":6443,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-ct59
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 04:39:16.428: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ct59" in namespace "subpath-9963" to be "Succeeded or Failed"
Sep 17 04:39:16.435: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Pending", Reason="", readiness=false. Elapsed: 6.673154ms
Sep 17 04:39:18.439: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010911687s
Sep 17 04:39:20.444: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 4.016299243s
Sep 17 04:39:22.450: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 6.021440011s
Sep 17 04:39:24.455: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 8.027219836s
Sep 17 04:39:26.459: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 10.030845192s
... skipping 2 lines ...
Sep 17 04:39:32.474: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 16.045529763s
Sep 17 04:39:34.478: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 18.049984859s
Sep 17 04:39:36.482: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 20.053647326s
Sep 17 04:39:38.486: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Running", Reason="", readiness=true. Elapsed: 22.05766344s
Sep 17 04:39:40.490: INFO: Pod "pod-subpath-test-secret-ct59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.061779302s
STEP: Saw pod success
Sep 17 04:39:40.490: INFO: Pod "pod-subpath-test-secret-ct59" satisfied condition "Succeeded or Failed"
Sep 17 04:39:40.494: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-subpath-test-secret-ct59 container test-container-subpath-secret-ct59: <nil>
STEP: delete the pod
Sep 17 04:39:40.546: INFO: Waiting for pod pod-subpath-test-secret-ct59 to disappear
Sep 17 04:39:40.550: INFO: Pod pod-subpath-test-secret-ct59 no longer exists
STEP: Deleting pod pod-subpath-test-secret-ct59
Sep 17 04:39:40.550: INFO: Deleting pod "pod-subpath-test-secret-ct59" in namespace "subpath-9963"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":342,"skipped":6445,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-d873bd28-4693-4894-8650-98f636936394
STEP: Creating a pod to test consume configMaps
Sep 17 04:39:40.636: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d" in namespace "projected-7835" to be "Succeeded or Failed"
Sep 17 04:39:40.647: INFO: Pod "pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.389213ms
Sep 17 04:39:42.651: INFO: Pod "pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015054476s
STEP: Saw pod success
Sep 17 04:39:42.651: INFO: Pod "pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d" satisfied condition "Succeeded or Failed"
Sep 17 04:39:42.653: INFO: Trying to get logs from node kt2-4d7c9b85-175c-minion-group-94gp pod pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d container agnhost-container: <nil>
STEP: delete the pod
Sep 17 04:39:42.669: INFO: Waiting for pod pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d to disappear
Sep 17 04:39:42.672: INFO: Pod pod-projected-configmaps-46c2c26a-d66c-4a1b-b14c-a96be145a24d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 04:39:42.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7835" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6451,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 17 04:39:44.761: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.794: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.798: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.803: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.806: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.809: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:44.809: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:39:49.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.822: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.828: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.836: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.842: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.846: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.852: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.856: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:49.856: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:39:54.817: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.821: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.824: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.827: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.831: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.835: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.838: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.848: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:54.848: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:39:59.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.822: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.826: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.831: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.835: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.841: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.847: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.852: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:39:59.852: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:40:04.818: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.823: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.827: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.831: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.834: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.839: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.843: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.848: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:04.848: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:40:09.856: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.881: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.886: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.889: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.893: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.898: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.901: INFO: Unable to read jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.948: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local from pod dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de: the server could not find the requested resource (get pods dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de)
Sep 17 04:40:09.948: INFO: Lookups using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local wheezy_udp@dns-test-service-2.dns-2633.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-2633.svc.cluster.local jessie_udp@dns-test-service-2.dns-2633.svc.cluster.local jessie_tcp@dns-test-service-2.dns-2633.svc.cluster.local]

Sep 17 04:40:14.848: INFO: DNS probes using dns-2633/dns-test-5a5714cf-927e-4c1b-a313-b990093cc6de succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 4 lines ...
• [SLOW TEST:32.266 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":344,"skipped":6461,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSep 17 04:40:14.946: INFO: Running AfterSuite actions on all nodes
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Sep 17 04:40:14.946: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Sep 17 04:40:14.946: INFO: Running AfterSuite actions on node 1
Sep 17 04:40:14.946: INFO: Dumping logs locally to: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1
Sep 17 04:40:14.946: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/junit_01.xml
{"msg":"Test Suite completed","total":346,"completed":344,"skipped":6505,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}


Summarizing 2 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323

Ran 346 of 6851 Specs in 7182.843 seconds
FAIL! -- 344 Passed | 2 Failed | 0 Pending | 6505 Skipped
--- FAIL: TestE2E (7184.87s)
FAIL

Ginkgo ran 1 suite in 1h59m44.978659333s
Test Suite Failed
F0917 04:40:15.003520   97107 ginkgo.go:205] failed to run ginkgo tester: exit status 1
I0917 04:40:15.017994    2890 down.go:29] GCE deployer starting Down()
I0917 04:40:15.018044    2890 common.go:204] checking locally built kubectl ...
I0917 04:40:15.018117    2890 down.go:43] About to run script at: /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh
I0917 04:40:15.018127    2890 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh 
Bringing down cluster using provider: gce
... calling verify-prereqs
... skipping 38 lines ...
Property "users.k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c-basic-auth" unset.
Property "contexts.k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c" unset.
Cleared config for k8s-infra-e2e-boskos-119_kt2-4d7c9b85-175c from /logs/artifacts/4d7c9b85-175c-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Done
I0917 04:46:22.535319    2890 down.go:53] about to delete nodeport firewall rule
I0917 04:46:22.535421    2890 local.go:42] ⚙️ gcloud compute firewall-rules delete --project k8s-infra-e2e-boskos-119 kt2-4d7c9b85-175c-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-119/global/firewalls/kt2-4d7c9b85-175c-minion-nodeports' was not found

W0917 04:46:23.555340    2890 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
I0917 04:46:23.555378    2890 down.go:59] releasing boskos project
I0917 04:46:23.580368    2890 boskos.go:83] Boskos heartbeat func received signal to close
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
14611a6678b8
... skipping 4 lines ...