Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2022-06-23 06:03
Elapsed: 2h12m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 328 lines ...
Trying to find master named 'kt2-d118eff5-f2b9-master'
Looking for address 'kt2-d118eff5-f2b9-master-ip'
Using master: kt2-d118eff5-f2b9-master (external IP: 35.202.0.82; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

..................Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9" set.
User "k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9" set.
Context "k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9" created.
Switched to context "k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9".
... skipping 27 lines ...
kt2-d118eff5-f2b9-minion-group-h59d   Ready                      <none>   36s   v1.25.0-alpha.1.99+0669ba386bde2e
kt2-d118eff5-f2b9-minion-group-jjkh   Ready                      <none>   37s   v1.25.0-alpha.1.99+0669ba386bde2e
kt2-d118eff5-f2b9-minion-group-qsw7   Ready                      <none>   36s   v1.25.0-alpha.1.99+0669ba386bde2e
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 40 lines ...

Specify --start=53622 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/cluster-logs'
Detecting nodes in the cluster
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Changing logfiles to be world-readable for download
... skipping 9 lines ...

Specify --start=103978 in the next get-serial-port-output invocation to get only the new output starting from here.
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-d118eff5-f2b9-minion-group
NODE_NAMES=kt2-d118eff5-f2b9-minion-group-h59d kt2-d118eff5-f2b9-minion-group-jjkh kt2-d118eff5-f2b9-minion-group-qsw7
Failures for kt2-d118eff5-f2b9-minion-group (if any):
I0623 06:30:45.217051    2928 dumplogs.go:121] About to run: [/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl cluster-info dump]
I0623 06:30:45.217087    2928 local.go:42] ⚙️ /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl cluster-info dump
I0623 06:30:46.110924    2928 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-ginkgo ; --focus-regex=\[Conformance\] ; --use-built-binaries
I0623 06:30:46.349696   95089 ginkgo.go:120] Using kubeconfig at /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
I0623 06:30:46.349806   95089 ginkgo.go:90] Running ginkgo test as /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/ginkgo [--nodes=1 /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/e2e.test -- --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --kubectl-path=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --ginkgo.flakeAttempts=1 --ginkgo.skip= --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791]
Jun 23 06:30:46.439: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0623 06:30:46.439497   95107 e2e.go:129] Starting e2e run "da8eb66e-ed18-4b75-82d4-bf7792ae3308" on Ginkgo node 1
{"msg":"Test Suite starting","total":357,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1655965846 - Will randomize all specs
Will run 357 of 7043 specs

Jun 23 06:30:48.014: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
... skipping 20 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-e9d46596-9886-41c0-9f5f-6a0b85a70b36
STEP: Creating a pod to test consume configMaps
Jun 23 06:30:48.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8" in namespace "configmap-3024" to be "Succeeded or Failed"
Jun 23 06:30:48.140: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.990608ms
Jun 23 06:30:50.155: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01946383s
Jun 23 06:30:52.152: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016567692s
Jun 23 06:30:54.155: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019415339s
Jun 23 06:30:56.156: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.02034755s
STEP: Saw pod success
Jun 23 06:30:56.156: INFO: Pod "pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8" satisfied condition "Succeeded or Failed"
Jun 23 06:30:56.162: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:30:56.191: INFO: Waiting for pod pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8 to disappear
Jun 23 06:30:56.197: INFO: Pod pod-configmaps-73bd6cc0-cfb7-4366-932e-b4f2c923d3f8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:8.131 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":1,"skipped":0,"failed":0}
[sig-architecture] Conformance Tests 
  should have at least two untainted nodes [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-architecture] Conformance Tests
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 6 lines ...
STEP: Getting node addresses
Jun 23 06:30:56.266: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
[AfterEach] [sig-architecture] Conformance Tests
  test/e2e/framework/framework.go:187
Jun 23 06:30:56.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "conformance-tests-4533" for this suite.
•{"msg":"PASSED [sig-architecture] Conformance Tests should have at least two untainted nodes [Conformance]","total":357,"completed":2,"skipped":0,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:9.316 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":357,"completed":3,"skipped":4,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 17 lines ...
Jun 23 06:31:08.569: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl exec --namespace=svcaccounts-7638 pod-service-account-ad205a7e-edbd-475e-9697-2cfb8629abb8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
Jun 23 06:31:08.758: INFO: Got root ca configmap in namespace "svcaccounts-7638"
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
Jun 23 06:31:08.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7638" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":357,"completed":4,"skipped":15,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 34 lines ...
• [SLOW TEST:20.093 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":357,"completed":5,"skipped":24,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 23 06:31:28.909: INFO: Waiting up to 5m0s for pod "pod-f9d453ae-10d3-4790-812b-5f09002b7264" in namespace "emptydir-5670" to be "Succeeded or Failed"
Jun 23 06:31:28.916: INFO: Pod "pod-f9d453ae-10d3-4790-812b-5f09002b7264": Phase="Pending", Reason="", readiness=false. Elapsed: 7.062087ms
Jun 23 06:31:30.922: INFO: Pod "pod-f9d453ae-10d3-4790-812b-5f09002b7264": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01209085s
Jun 23 06:31:32.922: INFO: Pod "pod-f9d453ae-10d3-4790-812b-5f09002b7264": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0123825s
STEP: Saw pod success
Jun 23 06:31:32.922: INFO: Pod "pod-f9d453ae-10d3-4790-812b-5f09002b7264" satisfied condition "Succeeded or Failed"
Jun 23 06:31:32.926: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-f9d453ae-10d3-4790-812b-5f09002b7264 container test-container: <nil>
STEP: delete the pod
Jun 23 06:31:32.951: INFO: Waiting for pod pod-f9d453ae-10d3-4790-812b-5f09002b7264 to disappear
Jun 23 06:31:32.956: INFO: Pod pod-f9d453ae-10d3-4790-812b-5f09002b7264 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 06:31:32.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5670" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":6,"skipped":66,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:31:37.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3062" for this suite.
STEP: Destroying namespace "webhook-3062-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":357,"completed":7,"skipped":77,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-instrumentation] Events API
... skipping 13 lines ...
Jun 23 06:31:37.372: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  test/e2e/framework/framework.go:187
Jun 23 06:31:37.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-250" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":357,"completed":8,"skipped":111,"failed":0}
SSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] CronJob
... skipping 20 lines ...
• [SLOW TEST:324.110 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":357,"completed":9,"skipped":117,"failed":0}
[sig-node] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 14 lines ...
Jun 23 06:37:05.597: INFO: The phase of Pod busybox-readonly-fsaa30f9dd-6671-4344-ae36-00d01d66a614 is Running (Ready = true)
Jun 23 06:37:05.597: INFO: Pod "busybox-readonly-fsaa30f9dd-6671-4344-ae36-00d01d66a614" satisfied condition "running and ready"
[AfterEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:187
Jun 23 06:37:05.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6939" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":10,"skipped":117,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 14 lines ...
[It] should be able to retrieve and filter logs  [Conformance]
  test/e2e/framework/framework.go:647
STEP: Waiting for log generator to start.
Jun 23 06:37:05.753: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jun 23 06:37:05.753: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4249" to be "running and ready, or succeeded"
Jun 23 06:37:05.763: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 9.288116ms
Jun 23 06:37:05.763: INFO: Error evaluating pod condition running and ready, or succeeded: want pod 'logs-generator' on 'kt2-d118eff5-f2b9-minion-group-jjkh' to be 'Running' but was 'Pending'
Jun 23 06:37:07.767: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.014049615s
Jun 23 06:37:07.767: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jun 23 06:37:07.768: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Jun 23 06:37:07.768: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-4249 logs logs-generator logs-generator'
Jun 23 06:37:07.862: INFO: stderr: ""
... skipping 35 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl logs
  test/e2e/kubectl/kubectl.go:1558
    should be able to retrieve and filter logs  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":357,"completed":11,"skipped":144,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
• [SLOW TEST:6.946 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":357,"completed":12,"skipped":158,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Kubelet
... skipping 14 lines ...
Jun 23 06:37:21.119: INFO: The phase of Pod busybox-scheduling-2c42710b-7e8c-46e6-83f8-d24a73dcb9e1 is Running (Ready = true)
Jun 23 06:37:21.119: INFO: Pod "busybox-scheduling-2c42710b-7e8c-46e6-83f8-d24a73dcb9e1" satisfied condition "running and ready"
[AfterEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:187
Jun 23 06:37:21.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9896" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":357,"completed":13,"skipped":183,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 18 lines ...
Jun 23 06:37:21.239: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4178  c2d87302-7ff8-46f0-a3bb-b30dd0281e9f 2142 0 2022-06-23 06:37:21 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-23 06:37:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 23 06:37:21.239: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-4178  c2d87302-7ff8-46f0-a3bb-b30dd0281e9f 2143 0 2022-06-23 06:37:21 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-06-23 06:37:21 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:187
Jun 23 06:37:21.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4178" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":357,"completed":14,"skipped":190,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:37:21.312: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215" in namespace "downward-api-9017" to be "Succeeded or Failed"
Jun 23 06:37:21.328: INFO: Pod "downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215": Phase="Pending", Reason="", readiness=false. Elapsed: 16.003899ms
Jun 23 06:37:23.339: INFO: Pod "downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027036792s
Jun 23 06:37:25.334: INFO: Pod "downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022813805s
STEP: Saw pod success
Jun 23 06:37:25.334: INFO: Pod "downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215" satisfied condition "Succeeded or Failed"
Jun 23 06:37:25.340: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215 container client-container: <nil>
STEP: delete the pod
Jun 23 06:37:25.374: INFO: Waiting for pod downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215 to disappear
Jun 23 06:37:25.383: INFO: Pod downwardapi-volume-5ddfeb6f-e2ad-4e6c-a7fd-854ffa728215 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 06:37:25.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9017" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":357,"completed":15,"skipped":193,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:37:25.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e" in namespace "downward-api-710" to be "Succeeded or Failed"
Jun 23 06:37:25.454: INFO: Pod "downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.865262ms
Jun 23 06:37:27.459: INFO: Pod "downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009082706s
Jun 23 06:37:29.614: INFO: Pod "downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164667885s
STEP: Saw pod success
Jun 23 06:37:29.614: INFO: Pod "downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e" satisfied condition "Succeeded or Failed"
Jun 23 06:37:29.731: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e container client-container: <nil>
STEP: delete the pod
Jun 23 06:37:30.223: INFO: Waiting for pod downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e to disappear
Jun 23 06:37:30.320: INFO: Pod downwardapi-volume-bc7136ef-ddce-451b-9402-12a13335ec6e no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:5.242 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":16,"skipped":197,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
... skipping 16 lines ...
• [SLOW TEST:10.083 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should create pods for an Indexed job with completion indexes and specified hostname [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should create pods for an Indexed job with completion indexes and specified hostname [Conformance]","total":357,"completed":17,"skipped":214,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with downward pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-downwardapi-25l4
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 06:37:40.760: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-25l4" in namespace "subpath-5119" to be "Succeeded or Failed"
Jun 23 06:37:40.766: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.567589ms
Jun 23 06:37:42.771: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 2.010462751s
Jun 23 06:37:44.771: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 4.010052385s
Jun 23 06:37:46.771: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 6.010466897s
Jun 23 06:37:48.773: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 8.012999181s
Jun 23 06:37:50.776: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 10.015508111s
... skipping 3 lines ...
Jun 23 06:37:58.772: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 18.01125359s
Jun 23 06:38:00.770: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 20.009758265s
Jun 23 06:38:02.773: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=true. Elapsed: 22.012737122s
Jun 23 06:38:04.771: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Running", Reason="", readiness=false. Elapsed: 24.010457352s
Jun 23 06:38:06.769: INFO: Pod "pod-subpath-test-downwardapi-25l4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.008979216s
STEP: Saw pod success
Jun 23 06:38:06.769: INFO: Pod "pod-subpath-test-downwardapi-25l4" satisfied condition "Succeeded or Failed"
Jun 23 06:38:06.772: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-subpath-test-downwardapi-25l4 container test-container-subpath-downwardapi-25l4: <nil>
STEP: delete the pod
Jun 23 06:38:06.796: INFO: Waiting for pod pod-subpath-test-downwardapi-25l4 to disappear
Jun 23 06:38:06.800: INFO: Pod pod-subpath-test-downwardapi-25l4 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-25l4
Jun 23 06:38:06.800: INFO: Deleting pod "pod-subpath-test-downwardapi-25l4" in namespace "subpath-5119"
... skipping 7 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with downward pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Conformance]","total":357,"completed":18,"skipped":234,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 28 lines ...
Jun 23 06:38:18.920: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.926: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.932: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.938: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.944: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.951: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:18.951: INFO: Lookups using dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local]

Jun 23 06:38:23.959: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.965: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.971: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.977: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.983: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.990: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:23.996: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:24.002: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:24.002: INFO: Lookups using dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local]

Jun 23 06:38:28.959: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:28.965: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.001: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.013: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.018: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.024: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.031: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.037: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:29.037: INFO: Lookups using dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local]

Jun 23 06:38:33.960: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.967: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.973: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.979: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.985: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.992: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:33.998: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:34.005: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local from pod dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55: the server could not find the requested resource (get pods dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55)
Jun 23 06:38:34.005: INFO: Lookups using dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7653.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7653.svc.cluster.local jessie_udp@dns-test-service-2.dns-7653.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7653.svc.cluster.local]

Jun 23 06:38:39.005: INFO: DNS probes using dns-7653/dns-test-bf80b1b2-ab6c-4548-a24c-ab80611f5b55 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 4 lines ...
• [SLOW TEST:32.306 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":357,"completed":19,"skipped":239,"failed":0}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-mkh8
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 06:38:39.191: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mkh8" in namespace "subpath-1051" to be "Succeeded or Failed"
Jun 23 06:38:39.197: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.786065ms
Jun 23 06:38:41.203: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 2.01162185s
Jun 23 06:38:43.201: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 4.010281702s
Jun 23 06:38:45.203: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 6.011597691s
Jun 23 06:38:47.201: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 8.010389297s
I0623 06:38:48.580084    2928 boskos.go:86] Sending heartbeat to Boskos
... skipping 3 lines ...
Jun 23 06:38:55.202: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 16.010773732s
Jun 23 06:38:57.204: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 18.012788351s
Jun 23 06:38:59.203: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=true. Elapsed: 20.011610106s
Jun 23 06:39:01.218: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Running", Reason="", readiness=false. Elapsed: 22.027482668s
Jun 23 06:39:03.202: INFO: Pod "pod-subpath-test-configmap-mkh8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.011049739s
STEP: Saw pod success
Jun 23 06:39:03.202: INFO: Pod "pod-subpath-test-configmap-mkh8" satisfied condition "Succeeded or Failed"
Jun 23 06:39:03.205: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-subpath-test-configmap-mkh8 container test-container-subpath-configmap-mkh8: <nil>
STEP: delete the pod
Jun 23 06:39:03.249: INFO: Waiting for pod pod-subpath-test-configmap-mkh8 to disappear
Jun 23 06:39:03.259: INFO: Pod pod-subpath-test-configmap-mkh8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-mkh8
Jun 23 06:39:03.259: INFO: Deleting pod "pod-subpath-test-configmap-mkh8" in namespace "subpath-1051"
... skipping 7 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with configmap pod with mountPath of existing file [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Conformance]","total":357,"completed":20,"skipped":243,"failed":0}
SSS
------------------------------
[sig-node] Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Containers
... skipping 10 lines ...
Jun 23 06:39:05.334: INFO: Pod "client-containers-edc33863-0264-4e8b-ac96-d5c2a1c8c219": Phase="Running", Reason="", readiness=true. Elapsed: 2.010102362s
Jun 23 06:39:05.334: INFO: Pod "client-containers-edc33863-0264-4e8b-ac96-d5c2a1c8c219" satisfied condition "running"
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
Jun 23 06:39:05.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8110" for this suite.
•{"msg":"PASSED [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":357,"completed":21,"skipped":246,"failed":0}
SSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:39:05.359: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
Jun 23 06:39:17.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8360" for this suite.

• [SLOW TEST:12.138 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":357,"completed":22,"skipped":249,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] DisruptionController
... skipping 26 lines ...
Jun 23 06:39:19.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-2226" for this suite.
[AfterEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
Jun 23 06:39:19.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3596" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":357,"completed":23,"skipped":275,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 06:39:19.713: INFO: Waiting up to 5m0s for pod "pod-5d6917b9-3531-4b48-8860-d16a8cb74618" in namespace "emptydir-5818" to be "Succeeded or Failed"
Jun 23 06:39:19.723: INFO: Pod "pod-5d6917b9-3531-4b48-8860-d16a8cb74618": Phase="Pending", Reason="", readiness=false. Elapsed: 10.147179ms
Jun 23 06:39:21.728: INFO: Pod "pod-5d6917b9-3531-4b48-8860-d16a8cb74618": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015054699s
Jun 23 06:39:23.728: INFO: Pod "pod-5d6917b9-3531-4b48-8860-d16a8cb74618": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01528258s
STEP: Saw pod success
Jun 23 06:39:23.728: INFO: Pod "pod-5d6917b9-3531-4b48-8860-d16a8cb74618" satisfied condition "Succeeded or Failed"
Jun 23 06:39:23.732: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-5d6917b9-3531-4b48-8860-d16a8cb74618 container test-container: <nil>
STEP: delete the pod
Jun 23 06:39:23.754: INFO: Waiting for pod pod-5d6917b9-3531-4b48-8860-d16a8cb74618 to disappear
Jun 23 06:39:23.759: INFO: Pod pod-5d6917b9-3531-4b48-8860-d16a8cb74618 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 06:39:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5818" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":24,"skipped":282,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 33 lines ...
• [SLOW TEST:5.659 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":357,"completed":25,"skipped":284,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 23 06:39:29.524: INFO: Waiting up to 5m0s for pod "pod-1aac171e-7abd-4de2-b636-33f611af3f8c" in namespace "emptydir-6506" to be "Succeeded or Failed"
Jun 23 06:39:29.534: INFO: Pod "pod-1aac171e-7abd-4de2-b636-33f611af3f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.642877ms
Jun 23 06:39:31.539: INFO: Pod "pod-1aac171e-7abd-4de2-b636-33f611af3f8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014821042s
Jun 23 06:39:33.540: INFO: Pod "pod-1aac171e-7abd-4de2-b636-33f611af3f8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016459183s
STEP: Saw pod success
Jun 23 06:39:33.540: INFO: Pod "pod-1aac171e-7abd-4de2-b636-33f611af3f8c" satisfied condition "Succeeded or Failed"
Jun 23 06:39:33.543: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-1aac171e-7abd-4de2-b636-33f611af3f8c container test-container: <nil>
STEP: delete the pod
Jun 23 06:39:33.566: INFO: Waiting for pod pod-1aac171e-7abd-4de2-b636-33f611af3f8c to disappear
Jun 23 06:39:33.570: INFO: Pod pod-1aac171e-7abd-4de2-b636-33f611af3f8c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 06:39:33.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6506" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":26,"skipped":294,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 06:39:33.635: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd" in namespace "security-context-test-2810" to be "Succeeded or Failed"
Jun 23 06:39:33.643: INFO: Pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.161214ms
Jun 23 06:39:35.647: INFO: Pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011264764s
Jun 23 06:39:37.647: INFO: Pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011827123s
Jun 23 06:39:39.648: INFO: Pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012056125s
Jun 23 06:39:39.648: INFO: Pod "alpine-nnp-false-cc2362a5-e1f5-465c-bbad-b3cd50e68abd" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 06:39:39.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2810" for this suite.

• [SLOW TEST:6.089 seconds]
[sig-node] Security Context
test/e2e/common/node/framework.go:23
  when creating containers with AllowPrivilegeEscalation
  test/e2e/common/node/security_context.go:298
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":27,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on node default medium
Jun 23 06:39:39.714: INFO: Waiting up to 5m0s for pod "pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb" in namespace "emptydir-910" to be "Succeeded or Failed"
Jun 23 06:39:39.725: INFO: Pod "pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb": Phase="Pending", Reason="", readiness=false. Elapsed: 10.888384ms
Jun 23 06:39:41.730: INFO: Pod "pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014987584s
Jun 23 06:39:43.730: INFO: Pod "pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015714716s
STEP: Saw pod success
Jun 23 06:39:43.730: INFO: Pod "pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb" satisfied condition "Succeeded or Failed"
Jun 23 06:39:43.733: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb container test-container: <nil>
STEP: delete the pod
Jun 23 06:39:43.756: INFO: Waiting for pod pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb to disappear
Jun 23 06:39:43.758: INFO: Pod pod-126e9cba-ef45-4c40-9ca5-ac0818cf22fb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 06:39:43.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-910" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":28,"skipped":330,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-e1e8b8c6-1bf1-4b55-b93b-dbe103bfba24
STEP: Creating a pod to test consume configMaps
Jun 23 06:39:43.840: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3" in namespace "projected-5622" to be "Succeeded or Failed"
Jun 23 06:39:43.851: INFO: Pod "pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.544395ms
Jun 23 06:39:45.856: INFO: Pod "pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016081726s
Jun 23 06:39:47.856: INFO: Pod "pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015988962s
STEP: Saw pod success
Jun 23 06:39:47.856: INFO: Pod "pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3" satisfied condition "Succeeded or Failed"
Jun 23 06:39:47.859: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:39:47.880: INFO: Waiting for pod pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3 to disappear
Jun 23 06:39:47.884: INFO: Pod pod-projected-configmaps-b21823e0-0411-4766-a053-c11c2571bab3 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:39:47.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5622" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":29,"skipped":340,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
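
Editor's note: the projected ConfigMap case above maps a key to a custom path and sets an explicit per-item file mode. A sketch of the volume definition such a test pod would carry follows; the ConfigMap name, key, path and mode are illustrative values, not necessarily the ones generated by the framework.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // "Item mode set" part of the test: per-item file permissions

	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-configmap-test-volume-map", // illustrative name
						},
						// "mappings": key "data-1" appears in the volume as path/to/data-2
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "path/to/data-2",
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
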
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:39:47.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3" in namespace "projected-3505" to be "Succeeded or Failed"
Jun 23 06:39:47.944: INFO: Pod "downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.540566ms
Jun 23 06:39:49.949: INFO: Pod "downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012368067s
Jun 23 06:39:51.949: INFO: Pod "downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012399161s
STEP: Saw pod success
Jun 23 06:39:51.949: INFO: Pod "downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3" satisfied condition "Succeeded or Failed"
Jun 23 06:39:51.952: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3 container client-container: <nil>
STEP: delete the pod
Jun 23 06:39:51.975: INFO: Waiting for pod downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3 to disappear
Jun 23 06:39:51.979: INFO: Pod downwardapi-volume-cd1199ae-425a-4be8-a0f4-3bbac54307b3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 06:39:51.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3505" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":357,"completed":30,"skipped":362,"failed":0}
SSSSSS
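
Editor's note: the projected downwardAPI test above exposes the container's CPU limit as a file inside the volume and compares what the container reads against the declared limit. Sketch of the projection involved; the path and container name are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The container reads this file; the test checks it matches
							// the limits.cpu declared on "client-container".
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
							},
						}},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
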
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:39:52.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a" in namespace "projected-9834" to be "Succeeded or Failed"
Jun 23 06:39:52.038: INFO: Pod "downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.717219ms
Jun 23 06:39:54.042: INFO: Pod "downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01032437s
Jun 23 06:39:56.044: INFO: Pod "downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011523179s
STEP: Saw pod success
Jun 23 06:39:56.044: INFO: Pod "downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a" satisfied condition "Succeeded or Failed"
Jun 23 06:39:56.048: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a container client-container: <nil>
STEP: delete the pod
Jun 23 06:39:56.072: INFO: Waiting for pod downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a to disappear
Jun 23 06:39:56.078: INFO: Pod downwardapi-volume-b9a685b4-67b9-4862-af3b-e07539ab6f2a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 06:39:56.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9834" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":357,"completed":31,"skipped":368,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should get a host IP [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 16 lines ...
Jun 23 06:39:58.168: INFO: Pod "pod-hostip-33797058-db1e-48d6-b109-ca74a9850c1e" satisfied condition "running and ready"
Jun 23 06:39:58.174: INFO: Pod pod-hostip-33797058-db1e-48d6-b109-ca74a9850c1e has hostIP: 10.128.0.4
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 06:39:58.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8707" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":357,"completed":32,"skipped":388,"failed":0}
SSSSSSSSSSSSSS
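
Editor's note: the Pods host-IP check above reduces to reading status.hostIP once the pod is running (10.128.0.4 in this run). A minimal client-go sketch that performs the same read; the namespace and pod name are stand-ins, and it assumes KUBECONFIG points at a kubeconfig like the kubetest2 one used by the suite.

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig, as the e2e framework does.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := client.CoreV1().Pods("pods-8707").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The conformance test asserts that this field is populated with the
	// node's IP once the pod is running and ready.
	fmt.Println("hostIP:", pod.Status.HostIP)
}
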
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 30 lines ...
Jun 23 06:40:14.413: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.419: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.448: INFO: Unable to read jessie_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.454: INFO: Unable to read jessie_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.460: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.466: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:14.493: INFO: Lookups using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 failed for: [wheezy_udp@dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_udp@dns-test-service.dns-6847.svc.cluster.local jessie_tcp@dns-test-service.dns-6847.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local]

Jun 23 06:40:19.503: INFO: Unable to read wheezy_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.510: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.517: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.523: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.559: INFO: Unable to read jessie_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.567: INFO: Unable to read jessie_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.572: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.582: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:19.612: INFO: Lookups using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 failed for: [wheezy_udp@dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_udp@dns-test-service.dns-6847.svc.cluster.local jessie_tcp@dns-test-service.dns-6847.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local]

Jun 23 06:40:24.502: INFO: Unable to read wheezy_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.508: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.514: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.520: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.560: INFO: Unable to read jessie_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.567: INFO: Unable to read jessie_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.573: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.580: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:24.606: INFO: Lookups using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 failed for: [wheezy_udp@dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_udp@dns-test-service.dns-6847.svc.cluster.local jessie_tcp@dns-test-service.dns-6847.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local]

Jun 23 06:40:29.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.572: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.623: INFO: Unable to read jessie_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.629: INFO: Unable to read jessie_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.635: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.642: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:29.668: INFO: Lookups using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 failed for: [wheezy_udp@dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_udp@dns-test-service.dns-6847.svc.cluster.local jessie_tcp@dns-test-service.dns-6847.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local]

Jun 23 06:40:34.501: INFO: Unable to read wheezy_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.507: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.513: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.519: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.548: INFO: Unable to read jessie_udp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.559: INFO: Unable to read jessie_tcp@dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.565: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.571: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local from pod dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0: the server could not find the requested resource (get pods dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0)
Jun 23 06:40:34.595: INFO: Lookups using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 failed for: [wheezy_udp@dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@dns-test-service.dns-6847.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_udp@dns-test-service.dns-6847.svc.cluster.local jessie_tcp@dns-test-service.dns-6847.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6847.svc.cluster.local]

Jun 23 06:40:39.656: INFO: DNS probes using dns-6847/dns-test-ddd17594-cdce-4119-af1d-decd5f81e5c0 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:41.840 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":357,"completed":33,"skipped":402,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 23 lines ...
• [SLOW TEST:10.583 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":357,"completed":34,"skipped":423,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 53 lines ...
• [SLOW TEST:8.243 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should list and delete a collection of DaemonSets [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":357,"completed":35,"skipped":454,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:40:58.851: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/common/node/init_container.go:164
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating the pod
Jun 23 06:40:58.882: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 06:41:03.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9941" for this suite.

• [SLOW TEST:5.147 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":357,"completed":36,"skipped":485,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 25 lines ...
Jun 23 06:41:06.207: INFO: ExecWithOptions: execute(POST https://35.202.0.82/api/v1/namespaces/dns-6227/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true)
Jun 23 06:41:06.343: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:187
Jun 23 06:41:06.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6227" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":357,"completed":37,"skipped":506,"failed":0}
SSSSSSSSSSS
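
Editor's note: the configurable-DNS test above sets dnsPolicy: None plus an explicit dnsConfig, then reads the resolver list back from inside the pod (the agnhost dns-server-list exec is visible in the log). Sketch of the spec fields involved; the nameserver, search domain, image and command are illustrative values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers-example"},
		Spec: corev1.PodSpec{
			// DNSNone tells the kubelet to ignore cluster/node resolv.conf and
			// use only what DNSConfig provides.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox", // stand-in; the test uses agnhost
				Command: []string{"sh", "-c", "cat /etc/resolv.conf && sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
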
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 16 lines ...
• [SLOW TEST:12.494 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":357,"completed":38,"skipped":517,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 27 lines ...
• [SLOW TEST:16.207 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":357,"completed":39,"skipped":540,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
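
Editor's note: the ResourceQuota case above verifies that a quota scoped to BestEffort charges only pods with no resource requests or limits. Sketch of such a quota object; the name and the hard pod limit are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-besteffort"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"),
			},
			// Only BestEffort pods (no requests or limits) count against this
			// quota; Burstable/Guaranteed pods are not charged.
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}
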
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-d43c5f52-5fd0-472d-9015-3ef5348732ac
STEP: Creating a pod to test consume configMaps
Jun 23 06:41:35.166: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd" in namespace "configmap-1195" to be "Succeeded or Failed"
Jun 23 06:41:35.213: INFO: Pod "pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd": Phase="Pending", Reason="", readiness=false. Elapsed: 46.408133ms
Jun 23 06:41:37.217: INFO: Pod "pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050830732s
Jun 23 06:41:39.217: INFO: Pod "pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050397377s
STEP: Saw pod success
Jun 23 06:41:39.217: INFO: Pod "pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd" satisfied condition "Succeeded or Failed"
Jun 23 06:41:39.220: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd container configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 06:41:39.313: INFO: Waiting for pod pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd to disappear
Jun 23 06:41:39.319: INFO: Pod pod-configmaps-5f3264c0-10fc-4c42-9b29-5f0d3e359edd no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:41:39.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1195" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":357,"completed":40,"skipped":590,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] IngressClass API
... skipping 22 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  test/e2e/framework/framework.go:187
Jun 23 06:41:39.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-7269" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":357,"completed":41,"skipped":615,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name projected-secret-test-f6a6e3b8-f556-4ef4-87a2-b622ce3ebdc7
STEP: Creating a pod to test consume secrets
Jun 23 06:41:39.513: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933" in namespace "projected-7108" to be "Succeeded or Failed"
Jun 23 06:41:39.520: INFO: Pod "pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933": Phase="Pending", Reason="", readiness=false. Elapsed: 7.726507ms
Jun 23 06:41:41.525: INFO: Pod "pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012085611s
Jun 23 06:41:43.526: INFO: Pod "pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013185937s
STEP: Saw pod success
Jun 23 06:41:43.526: INFO: Pod "pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933" satisfied condition "Succeeded or Failed"
Jun 23 06:41:43.529: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 06:41:43.558: INFO: Waiting for pod pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933 to disappear
Jun 23 06:41:43.564: INFO: Pod pod-projected-secrets-dfee7a73-db68-428f-b7f9-c0580b69d933 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 06:41:43.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7108" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":357,"completed":42,"skipped":637,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  test/e2e/framework/framework.go:647
Jun 23 06:41:43.611: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 06:41:44.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7405" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":357,"completed":43,"skipped":644,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 23 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:187
Jun 23 06:41:46.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9727" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [Conformance]","total":357,"completed":44,"skipped":659,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:8.732 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":357,"completed":45,"skipped":702,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with secret pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-secret-mvgj
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 06:41:55.436: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mvgj" in namespace "subpath-8152" to be "Succeeded or Failed"
Jun 23 06:41:55.446: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Pending", Reason="", readiness=false. Elapsed: 9.353236ms
Jun 23 06:41:57.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 2.014012239s
Jun 23 06:41:59.449: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 4.013038249s
Jun 23 06:42:01.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 6.01392983s
Jun 23 06:42:03.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 8.013855253s
Jun 23 06:42:05.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 10.013998277s
... skipping 2 lines ...
Jun 23 06:42:11.452: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 16.015898692s
Jun 23 06:42:13.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 18.014180523s
Jun 23 06:42:15.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=true. Elapsed: 20.013445727s
Jun 23 06:42:17.450: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Running", Reason="", readiness=false. Elapsed: 22.013997599s
Jun 23 06:42:19.451: INFO: Pod "pod-subpath-test-secret-mvgj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.014773423s
STEP: Saw pod success
Jun 23 06:42:19.451: INFO: Pod "pod-subpath-test-secret-mvgj" satisfied condition "Succeeded or Failed"
Jun 23 06:42:19.488: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-subpath-test-secret-mvgj container test-container-subpath-secret-mvgj: <nil>
STEP: delete the pod
Jun 23 06:42:19.536: INFO: Waiting for pod pod-subpath-test-secret-mvgj to disappear
Jun 23 06:42:19.544: INFO: Pod pod-subpath-test-secret-mvgj no longer exists
STEP: Deleting pod pod-subpath-test-secret-mvgj
Jun 23 06:42:19.544: INFO: Deleting pod "pod-subpath-test-secret-mvgj" in namespace "subpath-8152"
... skipping 7 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with secret pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Conformance]","total":357,"completed":46,"skipped":710,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 62 lines ...
• [SLOW TEST:12.407 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":357,"completed":47,"skipped":747,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 29 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl run pod
  test/e2e/kubectl/kubectl.go:1686
    should create a pod from an image when restart is Never  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":357,"completed":48,"skipped":750,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 3 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test service account token: 
Jun 23 06:42:59.501: INFO: Waiting up to 5m0s for pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56" in namespace "svcaccounts-8218" to be "Succeeded or Failed"
Jun 23 06:42:59.511: INFO: Pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56": Phase="Pending", Reason="", readiness=false. Elapsed: 9.438491ms
Jun 23 06:43:01.515: INFO: Pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013702708s
Jun 23 06:43:03.516: INFO: Pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014253239s
Jun 23 06:43:05.518: INFO: Pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016379888s
STEP: Saw pod success
Jun 23 06:43:05.518: INFO: Pod "test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56" satisfied condition "Succeeded or Failed"
Jun 23 06:43:05.522: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:43:05.553: INFO: Waiting for pod test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56 to disappear
Jun 23 06:43:05.558: INFO: Pod test-pod-297d3091-2af7-4a04-a807-ab7ac1acda56 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.112 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  should mount projected service account token [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":357,"completed":49,"skipped":753,"failed":0}
SSSSS
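
Editor's note: the ServiceAccounts test above mounts a projected service-account token rather than the legacy secret-based token. Sketch of the volume projection it relies on; the path, audience and expiry are illustrative values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	expiry := int64(3600) // token lifetime in seconds; the kubelet rotates it before expiry

	vol := corev1.Volume{
		Name: "sa-token",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						Path:              "token",
						Audience:          "https://kubernetes.default.svc", // illustrative audience
						ExpirationSeconds: &expiry,
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
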
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:43:05.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0" in namespace "downward-api-5348" to be "Succeeded or Failed"
Jun 23 06:43:05.651: INFO: Pod "downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.02612ms
Jun 23 06:43:07.655: INFO: Pod "downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011726097s
Jun 23 06:43:09.657: INFO: Pod "downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013013591s
STEP: Saw pod success
Jun 23 06:43:09.657: INFO: Pod "downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0" satisfied condition "Succeeded or Failed"
Jun 23 06:43:09.660: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0 container client-container: <nil>
STEP: delete the pod
Jun 23 06:43:09.707: INFO: Waiting for pod downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0 to disappear
Jun 23 06:43:09.711: INFO: Pod downwardapi-volume-242abe9d-4de4-4eb1-ba1e-4c881e0717f0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 06:43:09.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5348" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":357,"completed":50,"skipped":758,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 16 lines ...
• [SLOW TEST:14.789 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":357,"completed":51,"skipped":780,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 33 lines ...
Jun 23 06:43:28.683: INFO: Waiting up to 5m0s for pod "execpod2nfjx" in namespace "services-167" to be "running"
Jun 23 06:43:28.690: INFO: Pod "execpod2nfjx": Phase="Pending", Reason="", readiness=false. Elapsed: 6.532357ms
Jun 23 06:43:30.695: INFO: Pod "execpod2nfjx": Phase="Running", Reason="", readiness=true. Elapsed: 2.011422432s
Jun 23 06:43:30.695: INFO: Pod "execpod2nfjx" satisfied condition "running"
Jun 23 06:43:31.696: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-167 exec execpod2nfjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jun 23 06:43:32.933: INFO: rc: 1
Jun 23 06:43:32.933: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-167 exec execpod2nfjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: connect to multi-endpoint-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 06:43:33.933: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-167 exec execpod2nfjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jun 23 06:43:35.171: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Jun 23 06:43:35.171: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 06:43:35.171: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-167 exec execpod2nfjx -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.142.218 80'
... skipping 21 lines ...
• [SLOW TEST:11.756 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":357,"completed":52,"skipped":796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] ConfigMap
... skipping 12 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:43:36.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3457" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":357,"completed":53,"skipped":828,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:43:36.408: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 06:43:36.462: INFO: Waiting up to 2m0s for pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9" in namespace "var-expansion-9524" to be "container 0 failed with reason CreateContainerConfigError"
Jun 23 06:43:36.477: INFO: Pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.350196ms
Jun 23 06:43:38.482: INFO: Pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020007708s
Jun 23 06:43:38.482: INFO: Pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Jun 23 06:43:38.482: INFO: Deleting pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9" in namespace "var-expansion-9524"
Jun 23 06:43:38.491: INFO: Wait up to 5m0s for pod "var-expansion-0b3bbb48-2d49-4078-b604-30f9a8ca4ce9" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jun 23 06:43:40.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9524" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":357,"completed":54,"skipped":842,"failed":0}
SSS
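
Editor's note: the Variable Expansion case above deliberately makes subPathExpr expand to an absolute path, and the log shows the expected outcome: the container never starts and the pod reports CreateContainerConfigError. A sketch of the offending container definition; the env var name, value and paths are illustrative assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:  "dapi-container",
		Image: "busybox", // stand-in image
		Env: []corev1.EnvVar{{
			Name:  "POD_NAME",
			Value: "/tmp/absolute", // expansion result is absolute on purpose
		}},
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "workdir",
			MountPath: "/volume_mount",
			// subPathExpr must resolve to a relative path; "$(POD_NAME)" here
			// resolves to "/tmp/absolute", so container creation is expected to
			// fail with CreateContainerConfigError, as seen in the log above.
			SubPathExpr: "$(POD_NAME)",
		}},
	}
	out, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(out))
}
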
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 90 lines ...
• [SLOW TEST:19.685 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":357,"completed":55,"skipped":845,"failed":0}
SSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] PodTemplates
... skipping 15 lines ...
STEP: check that the list of pod templates matches the requested quantity
Jun 23 06:44:00.272: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:187
Jun 23 06:44:00.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2238" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":357,"completed":56,"skipped":849,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 16 lines ...
STEP: Updating configmap configmap-test-upd-67979e78-cd89-45ca-8775-16d1fda06018
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:44:04.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1745" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":57,"skipped":860,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 8 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  test/e2e/framework/framework.go:187
Jun 23 06:44:04.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-6753" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":357,"completed":58,"skipped":873,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-900f696d-ad76-4def-a4fa-275ef4437520
STEP: Creating a pod to test consume secrets
Jun 23 06:44:04.527: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818" in namespace "projected-1325" to be "Succeeded or Failed"
Jun 23 06:44:04.533: INFO: Pod "pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818": Phase="Pending", Reason="", readiness=false. Elapsed: 5.891551ms
Jun 23 06:44:06.537: INFO: Pod "pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010182149s
Jun 23 06:44:08.539: INFO: Pod "pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011526612s
STEP: Saw pod success
Jun 23 06:44:08.539: INFO: Pod "pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818" satisfied condition "Succeeded or Failed"
Jun 23 06:44:08.542: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 06:44:08.591: INFO: Waiting for pod pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818 to disappear
Jun 23 06:44:08.596: INFO: Pod pod-projected-secrets-17a266ed-7ee8-40e8-8c49-f36c0087e818 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 06:44:08.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1325" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":357,"completed":59,"skipped":880,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 36 lines ...
• [SLOW TEST:7.323 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":357,"completed":60,"skipped":886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should support CronJob API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] CronJob
... skipping 24 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  test/e2e/framework/framework.go:187
Jun 23 06:44:16.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6645" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":357,"completed":61,"skipped":990,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 19 lines ...
Jun 23 06:44:20.427: INFO: The phase of Pod pod-exec-websocket-36e75a0b-835a-49f7-a3df-d5928052bae9 is Running (Ready = true)
Jun 23 06:44:20.427: INFO: Pod "pod-exec-websocket-36e75a0b-835a-49f7-a3df-d5928052bae9" satisfied condition "running and ready"
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 06:44:20.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6031" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":357,"completed":62,"skipped":1009,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:6.966 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":357,"completed":63,"skipped":1012,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 82 lines ...
• [SLOW TEST:5.949 seconds]
[sig-node] KubeletManagedEtcHosts
test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":64,"skipped":1058,"failed":0}
[sig-apps] DisruptionController 
  should update/patch PodDisruptionBudget status [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 14 lines ...
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
Jun 23 06:44:35.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9315" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":357,"completed":65,"skipped":1058,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 145 lines ...
• [SLOW TEST:42.961 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":357,"completed":66,"skipped":1072,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should observe PodDisruptionBudget status updated [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] DisruptionController
... skipping 21 lines ...
• [SLOW TEST:10.211 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":357,"completed":67,"skipped":1092,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 11 lines ...
  test/e2e/framework/framework.go:647
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 06:45:29.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9708" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":357,"completed":68,"skipped":1125,"failed":0}
SSSSS
------------------------------
[sig-node] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Secrets
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:45:29.303: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name secret-emptykey-test-a9fb66db-93fc-4433-a9a0-8cfcf1864a9a
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jun 23 06:45:29.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3753" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":357,"completed":69,"skipped":1130,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 06:45:29.407: INFO: Waiting up to 5m0s for pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726" in namespace "security-context-3127" to be "Succeeded or Failed"
Jun 23 06:45:29.419: INFO: Pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726": Phase="Pending", Reason="", readiness=false. Elapsed: 11.967119ms
Jun 23 06:45:31.426: INFO: Pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018385199s
Jun 23 06:45:33.424: INFO: Pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017232674s
Jun 23 06:45:35.428: INFO: Pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02073268s
STEP: Saw pod success
Jun 23 06:45:35.428: INFO: Pod "security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726" satisfied condition "Succeeded or Failed"
Jun 23 06:45:35.435: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726 container test-container: <nil>
STEP: delete the pod
Jun 23 06:45:35.486: INFO: Waiting for pod security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726 to disappear
Jun 23 06:45:35.490: INFO: Pod security-context-a31a31d5-817a-47d8-ade0-0c2dd7962726 no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.155 seconds]
[sig-node] Security Context
test/e2e/node/framework.go:23
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":357,"completed":70,"skipped":1195,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replace and Patch tests [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 25 lines ...
• [SLOW TEST:7.753 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":357,"completed":71,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
Jun 23 06:45:46.392: INFO: Pod "execpod9bsmd": Phase="Pending", Reason="", readiness=false. Elapsed: 15.52187ms
Jun 23 06:45:48.398: INFO: Pod "execpod9bsmd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022222248s
Jun 23 06:45:50.401: INFO: Pod "execpod9bsmd": Phase="Running", Reason="", readiness=true. Elapsed: 4.024384026s
Jun 23 06:45:50.401: INFO: Pod "execpod9bsmd" satisfied condition "running"
Jun 23 06:45:51.406: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-6128 exec execpod9bsmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jun 23 06:45:52.641: INFO: rc: 1
Jun 23 06:45:52.641: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-6128 exec execpod9bsmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 nodeport-test 80
nc: connect to nodeport-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 06:45:53.641: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-6128 exec execpod9bsmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Jun 23 06:45:53.795: INFO: stderr: "+ nc -v -t -w 2 nodeport-test 80\n+ echo hostName\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Jun 23 06:45:53.796: INFO: stdout: "nodeport-test-pkspg"
Jun 23 06:45:53.796: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-6128 exec execpod9bsmd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.252.56 80'
... skipping 15 lines ...
• [SLOW TEST:11.028 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":357,"completed":72,"skipped":1242,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:45:54.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229" in namespace "projected-9007" to be "Succeeded or Failed"
Jun 23 06:45:54.337: INFO: Pod "downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229": Phase="Pending", Reason="", readiness=false. Elapsed: 6.537953ms
Jun 23 06:45:56.342: INFO: Pod "downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010749477s
Jun 23 06:45:58.342: INFO: Pod "downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011636049s
STEP: Saw pod success
Jun 23 06:45:58.342: INFO: Pod "downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229" satisfied condition "Succeeded or Failed"
Jun 23 06:45:58.345: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229 container client-container: <nil>
STEP: delete the pod
Jun 23 06:45:58.368: INFO: Waiting for pod downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229 to disappear
Jun 23 06:45:58.371: INFO: Pod downwardapi-volume-45ea96c4-498b-4f08-a7b4-c49c86106229 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 06:45:58.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9007" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":357,"completed":73,"skipped":1256,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 38 lines ...
• [SLOW TEST:10.745 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":357,"completed":74,"skipped":1275,"failed":0}
SSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:242.796 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":357,"completed":75,"skipped":1281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:50:15.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2287" for this suite.
STEP: Destroying namespace "webhook-2287-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":357,"completed":76,"skipped":1305,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service Proxy [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] version v1
... skipping 40 lines ...
Jun 23 06:50:17.965: INFO: Starting http.Client for https://35.202.0.82/api/v1/namespaces/proxy-3475/services/e2e-proxy-test-service/proxy?method=HEAD
Jun 23 06:50:17.977: INFO: http.Client request:HEAD StatusCode:301
[AfterEach] version v1
  test/e2e/framework/framework.go:187
Jun 23 06:50:17.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3475" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service Proxy [Conformance]","total":357,"completed":77,"skipped":1318,"failed":0}
S
------------------------------
[sig-node] RuntimeClass 
  should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] RuntimeClass
... skipping 8 lines ...
Jun 23 06:50:18.034: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-2734 to be scheduled
Jun 23 06:50:18.038: INFO: 1 pods are not scheduled: [runtimeclass-2734/test-runtimeclass-runtimeclass-2734-preconfigured-handler-wfs2h(7920079f-6957-4c7b-97e1-232681538b87)]
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
Jun 23 06:50:20.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-2734" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]","total":357,"completed":78,"skipped":1319,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 141 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:322
    should scale a replication controller  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":357,"completed":79,"skipped":1334,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 31 lines ...
Jun 23 06:50:41.636: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.642: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.649: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.656: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.663: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.670: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:41.697: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:50:46.702: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.709: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.722: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.727: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
... skipping 5 lines ...
Jun 23 06:50:46.828: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.835: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.841: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.863: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:46.892: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:50:51.707: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.714: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.721: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.727: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.733: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
... skipping 5 lines ...
Jun 23 06:50:51.802: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.816: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.824: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.831: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.837: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:51.863: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:50:56.704: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.710: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.716: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.722: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.728: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
... skipping 5 lines ...
Jun 23 06:50:56.789: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.795: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.801: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.807: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.813: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.820: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:50:56.846: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:51:01.704: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.710: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.717: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.723: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.728: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
... skipping 5 lines ...
Jun 23 06:51:01.831: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.842: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.849: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.854: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.861: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:01.891: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:51:06.704: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.719: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.737: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.757: INFO: Unable to read wheezy_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
... skipping 5 lines ...
Jun 23 06:51:06.829: INFO: Unable to read jessie_udp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.836: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664 from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.842: INFO: Unable to read jessie_udp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.850: INFO: Unable to read jessie_tcp@dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.865: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc from pod dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0: the server could not find the requested resource (get pods dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0)
Jun 23 06:51:06.893: INFO: Lookups using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5664 wheezy_tcp@dns-test-service.dns-5664 wheezy_udp@dns-test-service.dns-5664.svc wheezy_tcp@dns-test-service.dns-5664.svc wheezy_udp@_http._tcp.dns-test-service.dns-5664.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5664.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5664 jessie_tcp@dns-test-service.dns-5664 jessie_udp@dns-test-service.dns-5664.svc jessie_tcp@dns-test-service.dns-5664.svc jessie_udp@_http._tcp.dns-test-service.dns-5664.svc jessie_tcp@_http._tcp.dns-test-service.dns-5664.svc]

Jun 23 06:51:11.865: INFO: DNS probes using dns-5664/dns-test-bb2ad5b1-122e-4971-824b-c9df585ffba0 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:32.642 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":357,"completed":80,"skipped":1354,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 20 lines ...
• [SLOW TEST:11.250 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":357,"completed":81,"skipped":1354,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:51:27.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5404" for this suite.
STEP: Destroying namespace "webhook-5404-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":357,"completed":82,"skipped":1357,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:51:27.481: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226" in namespace "projected-1435" to be "Succeeded or Failed"
Jun 23 06:51:27.512: INFO: Pod "downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226": Phase="Pending", Reason="", readiness=false. Elapsed: 30.398885ms
Jun 23 06:51:29.520: INFO: Pod "downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039130465s
Jun 23 06:51:31.517: INFO: Pod "downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035356828s
STEP: Saw pod success
Jun 23 06:51:31.517: INFO: Pod "downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226" satisfied condition "Succeeded or Failed"
Jun 23 06:51:31.521: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226 container client-container: <nil>
STEP: delete the pod
Jun 23 06:51:31.609: INFO: Waiting for pod downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226 to disappear
Jun 23 06:51:31.613: INFO: Pod downwardapi-volume-9bd5b14b-f6e5-4501-a62c-a70248c9d226 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 06:51:31.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1435" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":83,"skipped":1392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-57b34a6e-a77d-459f-919b-f4065e2ba0a5
STEP: Creating a pod to test consume configMaps
Jun 23 06:51:31.682: INFO: Waiting up to 5m0s for pod "pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9" in namespace "configmap-6000" to be "Succeeded or Failed"
Jun 23 06:51:31.688: INFO: Pod "pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.680408ms
Jun 23 06:51:33.693: INFO: Pod "pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011389541s
Jun 23 06:51:35.695: INFO: Pod "pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012813995s
STEP: Saw pod success
Jun 23 06:51:35.695: INFO: Pod "pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9" satisfied condition "Succeeded or Failed"
Jun 23 06:51:35.699: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:51:35.721: INFO: Waiting for pod pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9 to disappear
Jun 23 06:51:35.726: INFO: Pod pod-configmaps-25625f10-7827-4389-ac43-ec7a61d1d2f9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:51:35.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6000" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":357,"completed":84,"skipped":1421,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] version v1
... skipping 40 lines ...
Jun 23 06:51:37.936: INFO: Starting http.Client for https://35.202.0.82/api/v1/namespaces/proxy-8088/services/test-service/proxy/some/path/with/PUT
Jun 23 06:51:37.950: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  test/e2e/framework/framework.go:187
Jun 23 06:51:37.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8088" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":357,"completed":85,"skipped":1477,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 23 lines ...
• [SLOW TEST:6.677 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":357,"completed":86,"skipped":1484,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-9aba18e5-78c4-40c1-bdf4-0c053f3b7326
STEP: Creating a pod to test consume configMaps
Jun 23 06:51:44.681: INFO: Waiting up to 5m0s for pod "pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c" in namespace "configmap-7403" to be "Succeeded or Failed"
Jun 23 06:51:44.694: INFO: Pod "pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.84916ms
Jun 23 06:51:46.698: INFO: Pod "pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016959452s
Jun 23 06:51:48.701: INFO: Pod "pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020366321s
STEP: Saw pod success
Jun 23 06:51:48.701: INFO: Pod "pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c" satisfied condition "Succeeded or Failed"
Jun 23 06:51:48.706: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:51:48.756: INFO: Waiting for pod pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c to disappear
Jun 23 06:51:48.760: INFO: Pod pod-configmaps-223193a0-51ae-496a-bd72-c532605e692c no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:51:48.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7403" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":87,"skipped":1504,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 27 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
Jun 23 06:51:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6921" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":357,"completed":88,"skipped":1516,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:51:52.678: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570" in namespace "downward-api-1312" to be "Succeeded or Failed"
Jun 23 06:51:52.682: INFO: Pod "downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570": Phase="Pending", Reason="", readiness=false. Elapsed: 4.737408ms
Jun 23 06:51:54.687: INFO: Pod "downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009455851s
Jun 23 06:51:56.687: INFO: Pod "downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009609959s
STEP: Saw pod success
Jun 23 06:51:56.687: INFO: Pod "downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570" satisfied condition "Succeeded or Failed"
Jun 23 06:51:56.690: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570 container client-container: <nil>
STEP: delete the pod
Jun 23 06:51:56.709: INFO: Waiting for pod downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570 to disappear
Jun 23 06:51:56.715: INFO: Pod downwardapi-volume-7f0725e6-cbe1-4c82-981e-f5d73d579570 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 06:51:56.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1312" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":357,"completed":89,"skipped":1516,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] CSIStorageCapacity 
   should support CSIStorageCapacities API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] CSIStorageCapacity
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-storage] CSIStorageCapacity
  test/e2e/framework/framework.go:187
Jun 23 06:51:56.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "csistoragecapacity-9856" for this suite.
•{"msg":"PASSED [sig-storage] CSIStorageCapacity  should support CSIStorageCapacities API operations [Conformance]","total":357,"completed":90,"skipped":1565,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-17de5edf-1a29-46ac-aa61-2e176f88106c
STEP: Creating a pod to test consume configMaps
Jun 23 06:51:56.873: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da" in namespace "projected-6230" to be "Succeeded or Failed"
Jun 23 06:51:56.879: INFO: Pod "pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161907ms
Jun 23 06:51:58.884: INFO: Pod "pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010950749s
Jun 23 06:52:00.885: INFO: Pod "pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012230781s
STEP: Saw pod success
Jun 23 06:52:00.885: INFO: Pod "pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da" satisfied condition "Succeeded or Failed"
Jun 23 06:52:00.889: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jun 23 06:52:00.916: INFO: Waiting for pod pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da to disappear
Jun 23 06:52:00.923: INFO: Pod pod-projected-configmaps-103dd108-3073-4891-9396-9ac61bda40da no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:52:00.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6230" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":357,"completed":91,"skipped":1589,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 38 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jun 23 06:52:02.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3871" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":357,"completed":92,"skipped":1598,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 19 lines ...
STEP: Creating configMap with name cm-test-opt-create-adf5e863-2af2-47b8-a58f-75928ae095d3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:52:06.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5369" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":93,"skipped":1688,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:52:10.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1544" for this suite.
STEP: Destroying namespace "webhook-1544-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":357,"completed":94,"skipped":1695,"failed":0}
SSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Discovery
... skipping 105 lines ...
Jun 23 06:52:11.355: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Jun 23 06:52:11.355: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  test/e2e/framework/framework.go:187
Jun 23 06:52:11.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1380" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":357,"completed":95,"skipped":1698,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 23 lines ...
• [SLOW TEST:6.683 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":357,"completed":96,"skipped":1698,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Containers 
  should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override arguments
Jun 23 06:52:18.111: INFO: Waiting up to 5m0s for pod "client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91" in namespace "containers-9110" to be "Succeeded or Failed"
Jun 23 06:52:18.117: INFO: Pod "client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91": Phase="Pending", Reason="", readiness=false. Elapsed: 5.257549ms
Jun 23 06:52:20.122: INFO: Pod "client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010963319s
Jun 23 06:52:22.122: INFO: Pod "client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010466789s
STEP: Saw pod success
Jun 23 06:52:22.122: INFO: Pod "client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91" satisfied condition "Succeeded or Failed"
Jun 23 06:52:22.125: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:52:22.159: INFO: Waiting for pod client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91 to disappear
Jun 23 06:52:22.163: INFO: Pod client-containers-75694de5-64d5-4ebe-a752-da7a8dff6a91 no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
Jun 23 06:52:22.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9110" for this suite.
•{"msg":"PASSED [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]","total":357,"completed":97,"skipped":1713,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 21 lines ...
Jun 23 06:52:23.589: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jun 23 06:52:23.589: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-3350 describe pod agnhost-primary-5rwgb'
Jun 23 06:52:23.669: INFO: stderr: ""
Jun 23 06:52:23.669: INFO: stdout: "Name:         agnhost-primary-5rwgb\nNamespace:    kubectl-3350\nPriority:     0\nNode:         kt2-d118eff5-f2b9-minion-group-jjkh/10.128.0.4\nStart Time:   Thu, 23 Jun 2022 06:52:22 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.0.143\nIPs:\n  IP:           10.64.0.143\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://114f6c62a1a716f3abaffcf2537e7912bdcd131131344ac0971f6f4c13f8f96a\n    Image:          registry.k8s.io/e2e-test-images/agnhost:2.39\n    Image ID:       registry.k8s.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 23 Jun 2022 06:52:23 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-54hbs (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-54hbs:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  1s    default-scheduler  Successfully assigned kubectl-3350/agnhost-primary-5rwgb to kt2-d118eff5-f2b9-minion-group-jjkh\n  Normal  Pulled     0s    kubelet            Container image \"registry.k8s.io/e2e-test-images/agnhost:2.39\" already present on machine\n  Normal  Created    0s    kubelet            Created container agnhost-primary\n  Normal  Started    0s    kubelet            Started container agnhost-primary\n"
Jun 23 06:52:23.669: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-3350 describe rc agnhost-primary'
Jun 23 06:52:23.750: INFO: stderr: ""
Jun 23 06:52:23.750: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-3350\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        registry.k8s.io/e2e-test-images/agnhost:2.39\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  1s    replication-controller  Created pod: agnhost-primary-5rwgb\n"
Jun 23 06:52:23.750: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-3350 describe service agnhost-primary'
Jun 23 06:52:23.822: INFO: stderr: ""
Jun 23 06:52:23.823: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-3350\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.0.149.54\nIPs:               10.0.149.54\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.0.143:6379\nSession Affinity:  None\nEvents:            <none>\n"
Jun 23 06:52:23.827: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-3350 describe node kt2-d118eff5-f2b9-master'
Jun 23 06:52:23.947: INFO: stderr: ""
Jun 23 06:52:23.947: INFO: stdout: "Name:               kt2-d118eff5-f2b9-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-central1\n                    failure-domain.beta.kubernetes.io/zone=us-central1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kt2-d118eff5-f2b9-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-central1\n                    topology.kubernetes.io/zone=us-central1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 23 Jun 2022 06:28:42 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  kt2-d118eff5-f2b9-master\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 23 Jun 2022 06:52:23 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 23 Jun 2022 06:28:56 +0000   Thu, 23 Jun 2022 06:28:56 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 23 Jun 2022 06:50:51 +0000   Thu, 23 Jun 2022 06:28:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 23 Jun 2022 06:50:51 +0000   Thu, 23 Jun 2022 06:28:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 23 Jun 2022 06:50:51 +0000   Thu, 23 Jun 2022 06:28:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 23 Jun 2022 06:50:51 +0000   Thu, 23 Jun 2022 06:29:02 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:   10.128.0.2\n  ExternalIP:   35.202.0.82\n  InternalDNS:  kt2-d118eff5-f2b9-master.c.k8s-infra-e2e-boskos-115.internal\n  Hostname:     kt2-d118eff5-f2b9-master.c.k8s-infra-e2e-boskos-115.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3773744Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3517744Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 892399cdc6f79b54b55dcc56ee4fbaee\n  System UUID:                892399cd-c6f7-9b54-b55d-cc56ee4fbaee\n  Boot ID:                    54218c09-5a1c-41a1-9ea6-5b6c3db6541e\n  Kernel Version:             5.4.129+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.25.0-alpha.1.99+0669ba386bde2e\n  Kube-Proxy Version:         v1.25.0-alpha.1.99+0669ba386bde2e\nPodCIDR:                      10.64.3.0/24\nPodCIDRs:                     10.64.3.0/24\nProviderID:                   gce://k8s-infra-e2e-boskos-115/us-central1-b/kt2-d118eff5-f2b9-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                                                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-events-kt2-d118eff5-f2b9-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 etcd-server-kt2-d118eff5-f2b9-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 fluentd-gcp-v3.2.0-g2s8p                            100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    22m\n  kube-system                 konnectivity-server-kt2-d118eff5-f2b9-master        25m (2%)      0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 kube-addon-manager-kt2-d118eff5-f2b9-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         22m\n  kube-system                 kube-apiserver-kt2-d118eff5-f2b9-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 kube-controller-manager-kt2-d118eff5-f2b9-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 kube-scheduler-kt2-d118eff5-f2b9-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 l7-lb-controller-kt2-d118eff5-f2b9-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         22m\n  kube-system                 metadata-proxy-v0.1-jvt9z                           32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      23m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        997m (99%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  
ephemeral-storage          0 (0%)       0 (0%)\n  hugepages-2Mi              0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:\n  Type    Reason          Age   From             Message\n  ----    ------          ----  ----             -------\n  Normal  RegisteredNode  23m   node-controller  Node kt2-d118eff5-f2b9-master event: Registered Node kt2-d118eff5-f2b9-master in Controller\n"
Jun 23 06:52:23.948: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-3350 describe namespace kubectl-3350'
Jun 23 06:52:24.017: INFO: stderr: ""
Jun 23 06:52:24.017: INFO: stdout: "Name:         kubectl-3350\nLabels:       e2e-framework=kubectl\n              e2e-run=da8eb66e-ed18-4b75-82d4-bf7792ae3308\n              kubernetes.io/metadata.name=kubectl-3350\n              pod-security.kubernetes.io/enforce=baseline\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 06:52:24.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3350" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":357,"completed":98,"skipped":1740,"failed":0}
SSSS
------------------------------
[sig-instrumentation] Events 
  should delete a collection of events [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-instrumentation] Events
... skipping 15 lines ...
STEP: check that the list of events matches the requested quantity
Jun 23 06:52:24.091: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  test/e2e/framework/framework.go:187
Jun 23 06:52:24.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6123" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":357,"completed":99,"skipped":1744,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should delete a collection of services [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 06:52:24.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9233" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
•{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":357,"completed":100,"skipped":1780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:52:24.338: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b" in namespace "downward-api-2876" to be "Succeeded or Failed"
Jun 23 06:52:24.344: INFO: Pod "downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.75661ms
Jun 23 06:52:26.348: INFO: Pod "downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010740313s
Jun 23 06:52:28.347: INFO: Pod "downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00966376s
STEP: Saw pod success
Jun 23 06:52:28.347: INFO: Pod "downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b" satisfied condition "Succeeded or Failed"
Jun 23 06:52:28.351: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b container client-container: <nil>
STEP: delete the pod
Jun 23 06:52:28.372: INFO: Waiting for pod downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b to disappear
Jun 23 06:52:28.378: INFO: Pod downwardapi-volume-d6fdca80-7706-4c67-a099-19ef6e267a1b no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 06:52:28.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2876" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":101,"skipped":1849,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 06:52:28.474: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49" in namespace "projected-7845" to be "Succeeded or Failed"
Jun 23 06:52:28.483: INFO: Pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49": Phase="Pending", Reason="", readiness=false. Elapsed: 9.640753ms
Jun 23 06:52:30.488: INFO: Pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014584321s
Jun 23 06:52:32.489: INFO: Pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015737793s
Jun 23 06:52:34.537: INFO: Pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063054635s
STEP: Saw pod success
Jun 23 06:52:34.537: INFO: Pod "downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49" satisfied condition "Succeeded or Failed"
Jun 23 06:52:34.540: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49 container client-container: <nil>
STEP: delete the pod
Jun 23 06:52:34.569: INFO: Waiting for pod downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49 to disappear
Jun 23 06:52:34.572: INFO: Pod downwardapi-volume-ac5aa4ff-589e-4651-b661-dbf7abfaeb49 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.199 seconds]
[sig-storage] Projected downwardAPI
test/e2e/common/storage/framework.go:23
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":357,"completed":102,"skipped":1863,"failed":0}
SSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 39 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":357,"completed":103,"skipped":1866,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:28.097 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":357,"completed":104,"skipped":1876,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods Extended Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods Extended
... skipping 11 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  test/e2e/framework/framework.go:187
Jun 23 06:53:10.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3092" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":357,"completed":105,"skipped":1894,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun 23 06:53:10.987: INFO: Asynchronously running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-7983 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 06:53:11.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7983" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":357,"completed":106,"skipped":1989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] EndpointSlice
... skipping 8 lines ...
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:187
Jun 23 06:53:13.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9088" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":357,"completed":107,"skipped":2021,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating secret secrets-6258/secret-test-65aa7b5e-7b93-4a67-b2f3-89372ff63c9e
STEP: Creating a pod to test consume secrets
Jun 23 06:53:13.297: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b" in namespace "secrets-6258" to be "Succeeded or Failed"
Jun 23 06:53:13.304: INFO: Pod "pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.247922ms
Jun 23 06:53:15.309: INFO: Pod "pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011944648s
Jun 23 06:53:17.310: INFO: Pod "pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012963958s
STEP: Saw pod success
Jun 23 06:53:17.310: INFO: Pod "pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b" satisfied condition "Succeeded or Failed"
Jun 23 06:53:17.315: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b container env-test: <nil>
STEP: delete the pod
Jun 23 06:53:17.349: INFO: Waiting for pod pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b to disappear
Jun 23 06:53:17.355: INFO: Pod pod-configmaps-fb9fd6bb-ee62-4f76-9248-2138c51a446b no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jun 23 06:53:17.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6258" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":357,"completed":108,"skipped":2033,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:11.120 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":357,"completed":109,"skipped":2064,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 38 lines ...
• [SLOW TEST:12.518 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":357,"completed":110,"skipped":2069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-2c9c2b65-9231-4f20-a97e-0ef72d175418
STEP: Creating a pod to test consume configMaps
Jun 23 06:53:41.068: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8" in namespace "projected-2657" to be "Succeeded or Failed"
Jun 23 06:53:41.083: INFO: Pod "pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.266139ms
Jun 23 06:53:43.088: INFO: Pod "pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020150495s
Jun 23 06:53:45.088: INFO: Pod "pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020627453s
STEP: Saw pod success
Jun 23 06:53:45.088: INFO: Pod "pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8" satisfied condition "Succeeded or Failed"
Jun 23 06:53:45.093: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:53:45.138: INFO: Waiting for pod pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8 to disappear
Jun 23 06:53:45.154: INFO: Pod pod-projected-configmaps-970455d5-876e-4e2a-916c-bb60d1b565c8 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:53:45.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2657" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":357,"completed":111,"skipped":2114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] RuntimeClass 
  should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] RuntimeClass
... skipping 8 lines ...
Jun 23 06:53:45.238: INFO: Waiting up to 1m20s for at least 1 pods in namespace runtimeclass-4395 to be scheduled
Jun 23 06:53:45.247: INFO: 1 pods are not scheduled: [runtimeclass-4395/test-runtimeclass-runtimeclass-4395-preconfigured-handler-xxld7(9feee54d-7c2b-49b3-aac8-69a8323fcc7c)]
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
Jun 23 06:53:47.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-4395" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]","total":357,"completed":112,"skipped":2138,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 57 lines ...
• [SLOW TEST:9.396 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":357,"completed":113,"skipped":2186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jun 23 06:53:56.770: INFO: Waiting up to 5m0s for pod "security-context-4502f2a4-5a55-4118-8caa-effc1b24791f" in namespace "security-context-9265" to be "Succeeded or Failed"
Jun 23 06:53:56.789: INFO: Pod "security-context-4502f2a4-5a55-4118-8caa-effc1b24791f": Phase="Pending", Reason="", readiness=false. Elapsed: 19.131067ms
Jun 23 06:53:58.793: INFO: Pod "security-context-4502f2a4-5a55-4118-8caa-effc1b24791f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023302334s
Jun 23 06:54:00.795: INFO: Pod "security-context-4502f2a4-5a55-4118-8caa-effc1b24791f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024921974s
STEP: Saw pod success
Jun 23 06:54:00.795: INFO: Pod "security-context-4502f2a4-5a55-4118-8caa-effc1b24791f" satisfied condition "Succeeded or Failed"
Jun 23 06:54:00.799: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod security-context-4502f2a4-5a55-4118-8caa-effc1b24791f container test-container: <nil>
STEP: delete the pod
Jun 23 06:54:00.824: INFO: Waiting for pod security-context-4502f2a4-5a55-4118-8caa-effc1b24791f to disappear
Jun 23 06:54:00.835: INFO: Pod security-context-4502f2a4-5a55-4118-8caa-effc1b24791f no longer exists
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 06:54:00.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9265" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":357,"completed":114,"skipped":2234,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
Jun 23 06:54:04.160: INFO: Waiting up to 5m0s for pod "execpod-affinitynzq8l" in namespace "services-7546" to be "running"
Jun 23 06:54:04.213: INFO: Pod "execpod-affinitynzq8l": Phase="Pending", Reason="", readiness=false. Elapsed: 53.017373ms
Jun 23 06:54:06.222: INFO: Pod "execpod-affinitynzq8l": Phase="Running", Reason="", readiness=true. Elapsed: 2.061964758s
Jun 23 06:54:06.222: INFO: Pod "execpod-affinitynzq8l" satisfied condition "running"
Jun 23 06:54:07.232: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jun 23 06:54:08.449: INFO: rc: 1
Jun 23 06:54:08.449: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-transition 80
nc: connect to affinity-nodeport-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 06:54:09.449: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jun 23 06:54:10.626: INFO: rc: 1
Jun 23 06:54:10.626: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-transition 80
nc: connect to affinity-nodeport-transition port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 06:54:11.450: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jun 23 06:54:11.632: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-transition 80\n+ echo hostName\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jun 23 06:54:11.632: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 06:54:11.632: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-7546 exec execpod-affinitynzq8l -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.134.217 80'
... skipping 76 lines ...
• [SLOW TEST:44.376 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":357,"completed":115,"skipped":2242,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 06:54:45.328: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d" in namespace "security-context-test-7408" to be "Succeeded or Failed"
Jun 23 06:54:45.335: INFO: Pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.945715ms
Jun 23 06:54:47.339: INFO: Pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010795767s
Jun 23 06:54:49.339: INFO: Pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010950944s
Jun 23 06:54:49.339: INFO: Pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d" satisfied condition "Succeeded or Failed"
Jun 23 06:54:49.348: INFO: Got logs for pod "busybox-privileged-false-a06a9bd0-161e-4d26-8ea7-d372ab26c41d": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 06:54:49.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7408" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":116,"skipped":2256,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 18 lines ...
• [SLOW TEST:15.474 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":357,"completed":117,"skipped":2270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should validate Statefulset Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 45 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should validate Statefulset Status endpoints [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":357,"completed":118,"skipped":2301,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:7.904 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":357,"completed":119,"skipped":2310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-9e46b182-79ca-48b1-b144-729c29497270
STEP: Creating a pod to test consume configMaps
Jun 23 06:55:33.241: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329" in namespace "projected-5286" to be "Succeeded or Failed"
Jun 23 06:55:33.255: INFO: Pod "pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329": Phase="Pending", Reason="", readiness=false. Elapsed: 14.507623ms
Jun 23 06:55:35.415: INFO: Pod "pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174134972s
Jun 23 06:55:37.277: INFO: Pod "pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03621322s
STEP: Saw pod success
Jun 23 06:55:37.277: INFO: Pod "pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329" satisfied condition "Succeeded or Failed"
Jun 23 06:55:37.317: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:55:37.448: INFO: Waiting for pod pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329 to disappear
Jun 23 06:55:37.453: INFO: Pod pod-projected-configmaps-3dae33c7-64c3-42ca-99ed-b32cd8610329 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:55:37.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5286" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":120,"skipped":2361,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should list and delete a collection of ReplicaSets [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 22 lines ...
• [SLOW TEST:5.207 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":357,"completed":121,"skipped":2377,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 28 lines ...
test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  test/e2e/apimachinery/crd_watch.go:44
    watch on custom resource definition objects [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":357,"completed":122,"skipped":2387,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:11.127 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":357,"completed":123,"skipped":2390,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:57:00.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4196" for this suite.
STEP: Destroying namespace "webhook-4196-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":357,"completed":124,"skipped":2398,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 12 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  test/e2e/framework/framework.go:187
Jun 23 06:57:01.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9863" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":357,"completed":125,"skipped":2439,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 21 lines ...
  test/e2e/framework/framework.go:187
Jun 23 06:57:04.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3292" for this suite.
STEP: Destroying namespace "webhook-3292-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":357,"completed":126,"skipped":2461,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:57:04.704: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:145
[It] should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jun 23 06:57:04.878: INFO: DaemonSet pods can't tolerate node kt2-d118eff5-f2b9-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 23 06:57:04.888: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 0
Jun 23 06:57:04.888: INFO: Node kt2-d118eff5-f2b9-minion-group-h59d is running 0 daemon pod, expected 1
... skipping 3 lines ...
Jun 23 06:57:06.915: INFO: DaemonSet pods can't tolerate node kt2-d118eff5-f2b9-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 23 06:57:06.931: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 2
Jun 23 06:57:06.931: INFO: Node kt2-d118eff5-f2b9-minion-group-h59d is running 0 daemon pod, expected 1
Jun 23 06:57:07.894: INFO: DaemonSet pods can't tolerate node kt2-d118eff5-f2b9-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 23 06:57:07.897: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3
Jun 23 06:57:07.897: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jun 23 06:57:07.937: INFO: DaemonSet pods can't tolerate node kt2-d118eff5-f2b9-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jun 23 06:57:07.946: INFO: Number of nodes with available pods controlled by daemonset daemon-set: 3
Jun 23 06:57:07.946: INFO: Number of running nodes: 3, number of available pods: 3 in daemonset daemon-set
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  test/e2e/apps/daemon_set.go:110
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7690, will wait for the garbage collector to delete the pods
Jun 23 06:57:09.036: INFO: Deleting DaemonSet.extensions daemon-set took: 6.078918ms
Jun 23 06:57:09.137: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.341915ms
... skipping 8 lines ...
Jun 23 06:57:12.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7690" for this suite.

• [SLOW TEST:7.368 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":357,"completed":127,"skipped":2536,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 20 lines ...
• [SLOW TEST:6.124 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":357,"completed":128,"skipped":2555,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 18 lines ...
Jun 23 06:57:21.298: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  test/e2e/framework/framework.go:187
Jun 23 06:57:21.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-193" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":357,"completed":129,"skipped":2648,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 17 lines ...
Jun 23 06:57:23.545: INFO: ExecWithOptions: execute(POST https://35.202.0.82/api/v1/namespaces/emptydir-7900/pods/pod-sharedvolume-589fa3d6-f025-4a9f-8220-8233b95468c9/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true)
Jun 23 06:57:23.776: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 06:57:23.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7900" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":357,"completed":130,"skipped":2661,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 24 lines ...
STEP: verifying the updated pod is in kubernetes
Jun 23 06:57:26.422: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 06:57:26.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6962" for this suite.
•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":357,"completed":131,"skipped":2673,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 16 lines ...
STEP: Updating configmap projected-configmap-test-upd-c15e8914-9a89-4344-b30d-a7772b70260d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 06:57:30.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-414" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":132,"skipped":2674,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-021984de-56da-456c-bfcf-6e026f53cf4c
STEP: Creating a pod to test consume configMaps
Jun 23 06:57:31.800: INFO: Waiting up to 5m0s for pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16" in namespace "configmap-9122" to be "Succeeded or Failed"
Jun 23 06:57:31.840: INFO: Pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16": Phase="Pending", Reason="", readiness=false. Elapsed: 40.04759ms
Jun 23 06:57:33.845: INFO: Pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044739085s
Jun 23 06:57:35.847: INFO: Pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046825716s
Jun 23 06:57:37.847: INFO: Pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046933171s
STEP: Saw pod success
Jun 23 06:57:37.847: INFO: Pod "pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16" satisfied condition "Succeeded or Failed"
Jun 23 06:57:37.851: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:57:37.914: INFO: Waiting for pod pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16 to disappear
Jun 23 06:57:37.918: INFO: Pod pod-configmaps-c470098d-02f9-44b9-9357-af44b6c29b16 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.526 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":357,"completed":133,"skipped":2720,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Pods 
  should run through the lifecycle of Pods and PodStatus [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 24 lines ...
Jun 23 06:57:42.522: INFO: observed event type MODIFIED
Jun 23 06:57:42.532: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 06:57:42.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3269" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":357,"completed":134,"skipped":2728,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 06:57:42.630: INFO: Waiting up to 5m0s for pod "busybox-user-65534-996b2a96-c433-4eb6-9cf4-22499a62a5e4" in namespace "security-context-test-4552" to be "Succeeded or Failed"
Jun 23 06:57:42.638: INFO: Pod "busybox-user-65534-996b2a96-c433-4eb6-9cf4-22499a62a5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055946ms
Jun 23 06:57:44.642: INFO: Pod "busybox-user-65534-996b2a96-c433-4eb6-9cf4-22499a62a5e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012025057s
Jun 23 06:57:46.643: INFO: Pod "busybox-user-65534-996b2a96-c433-4eb6-9cf4-22499a62a5e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012501603s
Jun 23 06:57:46.643: INFO: Pod "busybox-user-65534-996b2a96-c433-4eb6-9cf4-22499a62a5e4" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 06:57:46.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4552" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":135,"skipped":2769,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun 23 06:57:46.683: INFO: Asynchronously running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-2305 proxy --unix-socket=/tmp/kubectl-proxy-unix3385730033/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 06:57:46.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2305" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":357,"completed":136,"skipped":2770,"failed":0}
S
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 06:57:46.778: INFO: Waiting up to 5m0s for pod "downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4" in namespace "downward-api-5519" to be "Succeeded or Failed"
Jun 23 06:57:46.783: INFO: Pod "downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121299ms
Jun 23 06:57:48.788: INFO: Pod "downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010088529s
Jun 23 06:57:50.788: INFO: Pod "downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009666328s
STEP: Saw pod success
Jun 23 06:57:50.788: INFO: Pod "downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4" satisfied condition "Succeeded or Failed"
Jun 23 06:57:50.791: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4 container dapi-container: <nil>
STEP: delete the pod
Jun 23 06:57:50.815: INFO: Waiting for pod downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4 to disappear
Jun 23 06:57:50.820: INFO: Pod downward-api-5a79bc41-3e00-43df-804e-0d27aafbd0d4 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jun 23 06:57:50.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5519" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":357,"completed":137,"skipped":2771,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun 23 06:57:50.944: INFO: stderr: "WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.\n"
Jun 23 06:57:50.944: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"25+\", GitVersion:\"v1.25.0-alpha.1.99+0669ba386bde2e\", GitCommit:\"0669ba386bde2e756bc9c6779ad4a4f036200f28\", GitTreeState:\"clean\", BuildDate:\"2022-06-23T03:09:43Z\", GoVersion:\"go1.18.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nKustomize Version: v4.5.4\nServer Version: version.Info{Major:\"1\", Minor:\"25+\", GitVersion:\"v1.25.0-alpha.1.99+0669ba386bde2e\", GitCommit:\"0669ba386bde2e756bc9c6779ad4a4f036200f28\", GitTreeState:\"clean\", BuildDate:\"2022-06-23T03:09:43Z\", GoVersion:\"go1.18.3\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 06:57:50.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9978" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":357,"completed":138,"skipped":2838,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:9.142 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":357,"completed":139,"skipped":2876,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be immutable if `immutable` field is set [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 6 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 06:58:00.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-259" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":357,"completed":140,"skipped":2920,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should support creating EndpointSlice API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] EndpointSlice
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:187
Jun 23 06:58:00.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-304" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":357,"completed":141,"skipped":2953,"failed":0}
SSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 10 lines ...
STEP: creating the pod
Jun 23 06:58:00.509: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 06:58:04.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2364" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":357,"completed":142,"skipped":2961,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 19 lines ...
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 06:58:09.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1241" for this suite.
•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":357,"completed":143,"skipped":2973,"failed":0}

------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 54 lines ...
• [SLOW TEST:23.401 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":357,"completed":144,"skipped":2973,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 43 lines ...
• [SLOW TEST:22.169 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":357,"completed":145,"skipped":2994,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] server version
... skipping 12 lines ...
Jun 23 06:58:55.114: INFO: cleanMinorVersion: 25
Jun 23 06:58:55.114: INFO: Minor version: 25+
[AfterEach] [sig-api-machinery] server version
  test/e2e/framework/framework.go:187
Jun 23 06:58:55.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-5899" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":357,"completed":146,"skipped":3003,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-58392f3f-d7f7-49df-8a0a-d2bc34b893bf
STEP: Creating a pod to test consume configMaps
Jun 23 06:58:55.235: INFO: Waiting up to 5m0s for pod "pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf" in namespace "configmap-7688" to be "Succeeded or Failed"
Jun 23 06:58:55.242: INFO: Pod "pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.58929ms
Jun 23 06:58:57.247: INFO: Pod "pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011814196s
Jun 23 06:58:59.250: INFO: Pod "pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014949942s
STEP: Saw pod success
Jun 23 06:58:59.250: INFO: Pod "pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf" satisfied condition "Succeeded or Failed"
Jun 23 06:58:59.252: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf container agnhost-container: <nil>
STEP: delete the pod
Jun 23 06:58:59.274: INFO: Waiting for pod pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf to disappear
Jun 23 06:58:59.278: INFO: Pod pod-configmaps-38e43015-018f-4b87-9c91-b2b173d354cf no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 06:58:59.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7688" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":357,"completed":147,"skipped":3023,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 33 lines ...
• [SLOW TEST:5.234 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":357,"completed":148,"skipped":3025,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 06:59:04.523: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/common/node/init_container.go:164
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating the pod
Jun 23 06:59:04.595: INFO: PodSpec: initContainers in spec.initContainers
Jun 23 06:59:47.962: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9beba2ec-b774-404a-a9e7-ffd0a670ffa0", GenerateName:"", Namespace:"init-container-1570", SelfLink:"", UID:"de36a057-307d-410f-ae16-0d7eff6e1888", ResourceVersion:"9561", Generation:0, CreationTimestamp:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"595032664"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003868060), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2022, time.June, 23, 6, 59, 47, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003868090), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-s2vkg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc003dd2060), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-s2vkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", 
Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-s2vkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"registry.k8s.io/pause:3.7", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-s2vkg", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00279e0d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kt2-d118eff5-f2b9-minion-group-jjkh", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e82000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00279e150)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00279e170)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00279e178), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00279e17c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003756040), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil)}, 
Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.128.0.4", PodIP:"10.64.0.174", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.0.174"}}, StartTime:time.Date(2022, time.June, 23, 6, 59, 4, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e820e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002e821c0)}, Ready:false, RestartCount:3, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"registry.k8s.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://50e82c6340165717c40344293f6ac73e54474f82181510c38fe133aa152060be", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003dd2180), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003dd2160), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"registry.k8s.io/pause:3.7", ImageID:"", ContainerID:"", Started:(*bool)(0xc00279e1ff)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 06:59:47.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1570" for this suite.

• [SLOW TEST:43.451 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":357,"completed":149,"skipped":3048,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 56 lines ...
• [SLOW TEST:11.553 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":357,"completed":150,"skipped":3049,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 20 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 06:59:59.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1830" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":357,"completed":151,"skipped":3057,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 19 lines ...
Jun 23 07:00:03.741: INFO: Pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992" satisfied condition "running and ready"
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 23 07:00:04.260: INFO: Successfully updated pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992"
Jun 23 07:00:04.260: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992" in namespace "pods-1305" to be "terminated with reason DeadlineExceeded"
Jun 23 07:00:04.276: INFO: Pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992": Phase="Running", Reason="", readiness=true. Elapsed: 16.63747ms
Jun 23 07:00:06.321: INFO: Pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.061428249s
Jun 23 07:00:06.321: INFO: Pod "pod-update-activedeadlineseconds-89bff070-5e7f-4bba-8319-c21b3f806992" satisfied condition "terminated with reason DeadlineExceeded"
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 07:00:06.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1305" for this suite.

• [SLOW TEST:6.695 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":357,"completed":152,"skipped":3066,"failed":0}
SSSSSSSSS
------------------------------
[sig-instrumentation] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-instrumentation] Events
... skipping 12 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
  test/e2e/framework/framework.go:187
Jun 23 07:00:06.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2317" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":357,"completed":153,"skipped":3075,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 07:00:06.823: INFO: Waiting up to 5m0s for pod "pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9" in namespace "emptydir-2390" to be "Succeeded or Failed"
Jun 23 07:00:06.835: INFO: Pod "pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.626254ms
Jun 23 07:00:08.840: INFO: Pod "pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016415331s
Jun 23 07:00:10.840: INFO: Pod "pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016787474s
STEP: Saw pod success
Jun 23 07:00:10.840: INFO: Pod "pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9" satisfied condition "Succeeded or Failed"
Jun 23 07:00:10.844: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9 container test-container: <nil>
STEP: delete the pod
Jun 23 07:00:10.868: INFO: Waiting for pod pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9 to disappear
Jun 23 07:00:10.872: INFO: Pod pod-3d24bc21-8dc5-4884-82f5-e2ec4a1069e9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:00:10.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2390" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":154,"skipped":3085,"failed":0}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 20 lines ...
Jun 23 07:00:13.080: INFO: stderr: ""
Jun 23 07:00:13.080: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:00:13.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1330" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":357,"completed":155,"skipped":3094,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 26 lines ...
Jun 23 07:00:15.508: INFO: Pod "test-recreate-deployment-6ff6c9b95f-sz4zp" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-6ff6c9b95f-sz4zp test-recreate-deployment-6ff6c9b95f- deployment-5893  64c6d989-ce25-4085-b6e7-c0b0bd691a48 9782 0 2022-06-23 07:00:15 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:6ff6c9b95f] map[] [{apps/v1 ReplicaSet test-recreate-deployment-6ff6c9b95f d1a2e369-0ae6-412a-afc4-1fb28812ba5b 0xc0043af267 0xc0043af268}] [] [{kube-controller-manager Update v1 2022-06-23 07:00:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d1a2e369-0ae6-412a-afc4-1fb28812ba5b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:00:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vb6t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vb6t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMoun
t:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:00:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:00:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:00:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:00:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2022-06-23 07:00:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
Jun 23 07:00:15.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5893" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":357,"completed":156,"skipped":3107,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 70 lines ...
test/e2e/kubectl/framework.go:23
  Update Demo
  test/e2e/kubectl/kubectl.go:322
    should create and stop a replication controller  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":357,"completed":157,"skipped":3120,"failed":0}
SSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-instrumentation] Events API
... skipping 21 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  test/e2e/framework/framework.go:187
Jun 23 07:00:22.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2929" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":357,"completed":158,"skipped":3130,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 80 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":159,"skipped":3131,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 115 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":357,"completed":160,"skipped":3139,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
Jun 23 07:01:54.806: INFO: Waiting up to 5m0s for pod "execpod-affinityhlk2j" in namespace "services-2655" to be "running"
Jun 23 07:01:54.821: INFO: Pod "execpod-affinityhlk2j": Phase="Pending", Reason="", readiness=false. Elapsed: 14.204264ms
Jun 23 07:01:56.833: INFO: Pod "execpod-affinityhlk2j": Phase="Running", Reason="", readiness=true. Elapsed: 2.02660782s
Jun 23 07:01:56.833: INFO: Pod "execpod-affinityhlk2j" satisfied condition "running"
Jun 23 07:01:57.845: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Jun 23 07:01:59.041: INFO: rc: 1
Jun 23 07:01:59.041: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 07:02:00.041: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Jun 23 07:02:01.289: INFO: rc: 1
Jun 23 07:02:01.289: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport 80
+ echo hostName
nc: connect to affinity-nodeport port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 07:02:02.042: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Jun 23 07:02:02.202: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Jun 23 07:02:02.202: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 07:02:02.202: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2655 exec execpod-affinityhlk2j -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.47.50 80'
... skipping 38 lines ...
• [SLOW TEST:13.612 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":357,"completed":161,"skipped":3154,"failed":0}
SS
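
For context, the retried reachability probe in the NodePort session-affinity test above is just "kubectl exec" running netcat against the service name and port until the endpoints are programmed; the "rc: 1" / "Connection refused" lines are expected while kube-proxy catches up. A minimal sketch of the same retry loop, assuming kubectl is on PATH and using placeholder pod/namespace/service names rather than the ones from this run:

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// Retry "kubectl exec ... nc" until the service answers, mirroring the
// probe loop in the test output above. All names here are placeholders.
func main() {
    args := []string{
        "exec", "execpod-affinity", "--namespace", "services-example", "--",
        "/bin/sh", "-c", "echo hostName | nc -v -t -w 2 affinity-nodeport 80",
    }
    deadline := time.Now().Add(2 * time.Minute)
    for time.Now().Before(deadline) {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err == nil {
            fmt.Printf("service reachable:\n%s", out)
            return
        }
        // nc exits non-zero (rc: 1) until the service has ready endpoints.
        fmt.Printf("not reachable yet (%v), retrying...\n", err)
        time.Sleep(time.Second)
    }
    fmt.Println("timed out waiting for the service to become reachable")
}
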
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jun 23 07:02:05.394: INFO: Waiting up to 5m0s for pod "pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e" in namespace "emptydir-9527" to be "Succeeded or Failed"
Jun 23 07:02:05.403: INFO: Pod "pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.268273ms
Jun 23 07:02:07.408: INFO: Pod "pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013397567s
Jun 23 07:02:09.416: INFO: Pod "pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021460994s
STEP: Saw pod success
Jun 23 07:02:09.416: INFO: Pod "pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e" satisfied condition "Succeeded or Failed"
Jun 23 07:02:09.421: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e container test-container: <nil>
STEP: delete the pod
Jun 23 07:02:09.463: INFO: Waiting for pod pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e to disappear
Jun 23 07:02:09.472: INFO: Pod pod-aea7ed85-3689-4e0f-b993-93cd0f47a44e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:02:09.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9527" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":162,"skipped":3156,"failed":0}
SSSS
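
For reference, the EmptyDir tests above create short-lived pods whose volume is a memory-backed (tmpfs) emptyDir and then inspect the mode of files written into it. A rough sketch of that kind of pod spec using the core/v1 Go types; the image, names, and the verification command are placeholders, not the exact e2e fixture:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    // A pod with a tmpfs-backed emptyDir mounted at /mnt/volume; the single
    // container just lists the mount so the pod can run to completion.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{
                        Medium: corev1.StorageMediumMemory, // tmpfs instead of node disk
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox:1.36", // placeholder image
                Command:      []string{"ls", "-l", "/mnt/volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/volume"}},
            }},
        },
    }
    fmt.Printf("%+v\n", pod)
}
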
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-fe05e693-badd-49c5-8c5a-7fb5c609a1e4
STEP: Creating a pod to test consume configMaps
Jun 23 07:02:09.538: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec" in namespace "projected-9506" to be "Succeeded or Failed"
Jun 23 07:02:09.545: INFO: Pod "pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.522ms
Jun 23 07:02:11.549: INFO: Pod "pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010326235s
Jun 23 07:02:13.551: INFO: Pod "pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012219184s
STEP: Saw pod success
Jun 23 07:02:13.551: INFO: Pod "pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec" satisfied condition "Succeeded or Failed"
Jun 23 07:02:13.554: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:02:13.586: INFO: Waiting for pod pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec to disappear
Jun 23 07:02:13.592: INFO: Pod pod-projected-configmaps-98278329-4570-4dd2-b462-0e3c1d7ba1ec no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 07:02:13.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9506" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":357,"completed":163,"skipped":3160,"failed":0}
SSSSSS
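
The repeated 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' lines in the tests above are a simple phase poll against the API server, logged once per attempt with the elapsed time. A rough equivalent with client-go, assuming a placeholder kubeconfig path, namespace, and pod name:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Poll every 2s, for up to 5m, until the pod reaches a terminal phase.
    err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
        pod, err := client.CoreV1().Pods("default").Get(context.TODO(), "example-pod", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        fmt.Printf("phase=%s\n", pod.Status.Phase)
        return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
    })
    if err != nil {
        panic(err)
    }
    fmt.Println("pod reached a terminal phase")
}
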
------------------------------
[sig-node] PodTemplates 
  should replace a pod template [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] PodTemplates
... skipping 10 lines ...
Jun 23 07:02:13.696: INFO: Found updated podtemplate annotation: "true"

[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:187
Jun 23 07:02:13.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2023" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should replace a pod template [Conformance]","total":357,"completed":164,"skipped":3166,"failed":0}
S
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Service endpoints latency
... skipping 425 lines ...
• [SLOW TEST:10.808 seconds]
[sig-network] Service endpoints latency
test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":357,"completed":165,"skipped":3167,"failed":0}
SSSSS
------------------------------
[sig-node] Lease 
  lease API should be available [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Lease
... skipping 6 lines ...
[It] lease API should be available [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] Lease
  test/e2e/framework/framework.go:187
Jun 23 07:02:24.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7059" for this suite.
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":357,"completed":166,"skipped":3172,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 110 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":357,"completed":167,"skipped":3183,"failed":0}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 2 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-map-a2954922-c03e-40b8-8bd7-d8dd887f6feb
STEP: Creating a pod to test consume secrets
Jun 23 07:03:36.990: INFO: Waiting up to 5m0s for pod "pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441" in namespace "secrets-2505" to be "Succeeded or Failed"
Jun 23 07:03:36.996: INFO: Pod "pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441": Phase="Pending", Reason="", readiness=false. Elapsed: 6.550856ms
Jun 23 07:03:39.001: INFO: Pod "pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011217278s
Jun 23 07:03:41.002: INFO: Pod "pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012731579s
STEP: Saw pod success
Jun 23 07:03:41.002: INFO: Pod "pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441" satisfied condition "Succeeded or Failed"
Jun 23 07:03:41.007: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:03:41.042: INFO: Waiting for pod pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441 to disappear
Jun 23 07:03:41.049: INFO: Pod pod-secrets-dc206592-aa0c-4e72-856c-ce300fb28441 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:03:41.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2505" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":357,"completed":168,"skipped":3183,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 16 lines ...
test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  test/e2e/apimachinery/custom_resource_definition.go:50
    listing custom resource definition objects works  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":357,"completed":169,"skipped":3199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-eb51799b-5389-412e-970c-7b18ea4e6854
STEP: Creating a pod to test consume secrets
Jun 23 07:03:48.133: INFO: Waiting up to 5m0s for pod "pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f" in namespace "secrets-5476" to be "Succeeded or Failed"
Jun 23 07:03:48.140: INFO: Pod "pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.654464ms
I0623 07:03:48.664810    2928 boskos.go:86] Sending heartbeat to Boskos
Jun 23 07:03:50.157: INFO: Pod "pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02461821s
Jun 23 07:03:52.155: INFO: Pod "pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021763958s
STEP: Saw pod success
Jun 23 07:03:52.155: INFO: Pod "pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f" satisfied condition "Succeeded or Failed"
Jun 23 07:03:52.159: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:03:52.185: INFO: Waiting for pod pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f to disappear
Jun 23 07:03:52.190: INFO: Pod pod-secrets-a4650fa0-3016-4a42-8e3b-3ee017f1954f no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:03:52.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5476" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":357,"completed":170,"skipped":3240,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 23 07:03:52.280: INFO: Waiting up to 5m0s for pod "pod-eb2debad-4b0f-497d-8a3d-8106dce9678c" in namespace "emptydir-7854" to be "Succeeded or Failed"
Jun 23 07:03:52.286: INFO: Pod "pod-eb2debad-4b0f-497d-8a3d-8106dce9678c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.124444ms
Jun 23 07:03:54.290: INFO: Pod "pod-eb2debad-4b0f-497d-8a3d-8106dce9678c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009920982s
Jun 23 07:03:56.291: INFO: Pod "pod-eb2debad-4b0f-497d-8a3d-8106dce9678c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010938005s
STEP: Saw pod success
Jun 23 07:03:56.291: INFO: Pod "pod-eb2debad-4b0f-497d-8a3d-8106dce9678c" satisfied condition "Succeeded or Failed"
Jun 23 07:03:56.294: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-eb2debad-4b0f-497d-8a3d-8106dce9678c container test-container: <nil>
STEP: delete the pod
Jun 23 07:03:56.322: INFO: Waiting for pod pod-eb2debad-4b0f-497d-8a3d-8106dce9678c to disappear
Jun 23 07:03:56.325: INFO: Pod pod-eb2debad-4b0f-497d-8a3d-8106dce9678c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:03:56.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7854" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":171,"skipped":3287,"failed":0}
SS
------------------------------
[sig-node] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-5c1d045e-f6b8-4f4d-9a55-8ba027b6d2cc
STEP: Creating a pod to test consume secrets
Jun 23 07:03:56.419: INFO: Waiting up to 5m0s for pod "pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c" in namespace "secrets-8015" to be "Succeeded or Failed"
Jun 23 07:03:56.425: INFO: Pod "pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.676502ms
Jun 23 07:03:58.431: INFO: Pod "pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012136206s
Jun 23 07:04:00.431: INFO: Pod "pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012284947s
STEP: Saw pod success
Jun 23 07:04:00.431: INFO: Pod "pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c" satisfied condition "Succeeded or Failed"
Jun 23 07:04:00.434: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c container secret-env-test: <nil>
STEP: delete the pod
Jun 23 07:04:00.457: INFO: Waiting for pod pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c to disappear
Jun 23 07:04:00.462: INFO: Pod pod-secrets-98a6f91e-ed9e-4edf-bcad-d1e10724282c no longer exists
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:04:00.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8015" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":357,"completed":172,"skipped":3289,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 99 lines ...
• [SLOW TEST:17.307 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":357,"completed":173,"skipped":3291,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 07:04:17.833: INFO: Waiting up to 5m0s for pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc" in namespace "emptydir-5732" to be "Succeeded or Failed"
Jun 23 07:04:17.838: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472526ms
Jun 23 07:04:19.850: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016761434s
Jun 23 07:04:21.857: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024044327s
Jun 23 07:04:23.847: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.013950819s
Jun 23 07:04:25.844: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011203777s
Jun 23 07:04:27.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.009469422s
... skipping 5 lines ...
Jun 23 07:04:39.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Pending", Reason="", readiness=false. Elapsed: 22.010279518s
Jun 23 07:04:41.841: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Running", Reason="", readiness=false. Elapsed: 24.008248034s
Jun 23 07:04:43.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Running", Reason="", readiness=false. Elapsed: 26.010217281s
Jun 23 07:04:45.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Running", Reason="", readiness=false. Elapsed: 28.010257385s
Jun 23 07:04:47.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.009316001s
STEP: Saw pod success
Jun 23 07:04:47.843: INFO: Pod "pod-9da8451a-8777-4209-8cea-e2f30e917dcc" satisfied condition "Succeeded or Failed"
Jun 23 07:04:47.846: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-9da8451a-8777-4209-8cea-e2f30e917dcc container test-container: <nil>
STEP: delete the pod
Jun 23 07:04:47.871: INFO: Waiting for pod pod-9da8451a-8777-4209-8cea-e2f30e917dcc to disappear
Jun 23 07:04:47.875: INFO: Pod pod-9da8451a-8777-4209-8cea-e2f30e917dcc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:30.107 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":174,"skipped":3310,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 16 lines ...
Jun 23 07:04:53.489: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:05:05.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8694" for this suite.
STEP: Destroying namespace "webhook-8694-markers" for this suite.
... skipping 3 lines ...
• [SLOW TEST:17.948 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":357,"completed":175,"skipped":3317,"failed":0}
SSSSSSSSSSSSSSSSSSSS
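
For context, the admission webhook test above registers a deliberately slow webhook and varies timeoutSeconds and failurePolicy: with failurePolicy Ignore, a timed-out webhook call is treated as an admit rather than a rejection, which is what the "Having no error when timeout is shorter than webhook latency and failure policy is ignore" step exercises. A hedged sketch of the relevant knobs on a ValidatingWebhookConfiguration; the service, path, and rule values are placeholders:

package main

import (
    "fmt"

    admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    timeout := int32(1)                          // shorter than a 5s-slow webhook
    ignore := admissionregistrationv1.Ignore     // timeouts do not reject the request
    sideEffects := admissionregistrationv1.SideEffectClassNone
    path := "/always-allow-delay-5s"             // placeholder path

    cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
        ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook-demo"},
        Webhooks: []admissionregistrationv1.ValidatingWebhook{{
            Name:                    "slow.example.com",
            TimeoutSeconds:          &timeout,
            FailurePolicy:           &ignore,
            SideEffects:             &sideEffects,
            AdmissionReviewVersions: []string{"v1"},
            ClientConfig: admissionregistrationv1.WebhookClientConfig{
                Service: &admissionregistrationv1.ServiceReference{
                    Namespace: "webhook-demo",
                    Name:      "e2e-test-webhook",
                    Path:      &path,
                },
            },
            Rules: []admissionregistrationv1.RuleWithOperations{{
                Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
                Rule: admissionregistrationv1.Rule{
                    APIGroups:   []string{""},
                    APIVersions: []string{"v1"},
                    Resources:   []string{"configmaps"},
                },
            }},
        }},
    }
    fmt.Printf("%+v\n", cfg)
}
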
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:05:05.921: INFO: Waiting up to 5m0s for pod "downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef" in namespace "downward-api-1833" to be "Succeeded or Failed"
Jun 23 07:05:05.926: INFO: Pod "downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.630102ms
Jun 23 07:05:07.931: INFO: Pod "downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010541762s
Jun 23 07:05:09.933: INFO: Pod "downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011943177s
STEP: Saw pod success
Jun 23 07:05:09.933: INFO: Pod "downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef" satisfied condition "Succeeded or Failed"
Jun 23 07:05:09.936: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:05:09.984: INFO: Waiting for pod downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef to disappear
Jun 23 07:05:09.994: INFO: Pod downward-api-39bf6590-7245-45a0-8b70-3c90b20dd4ef no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jun 23 07:05:09.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1833" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":357,"completed":176,"skipped":3337,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
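
The Downward API test above injects the pod's own name, namespace, and IP into its environment through fieldRef selectors. A minimal sketch of that container configuration with the core/v1 types; the image and env var names are placeholders:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // Env vars populated from the pod's own metadata/status at runtime.
    env := []corev1.EnvVar{
        {Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
        {Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
        {Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
    }
    container := corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox:1.36", // placeholder image
        Command: []string{"sh", "-c", "env | grep POD_"},
        Env:     env,
    }
    fmt.Printf("%+v\n", container)
}
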
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-test-volume-map-b5acd641-f52a-437a-8bf4-cbffec43375d
STEP: Creating a pod to test consume configMaps
Jun 23 07:05:10.089: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1" in namespace "configmap-9985" to be "Succeeded or Failed"
Jun 23 07:05:10.098: INFO: Pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.786443ms
Jun 23 07:05:12.106: INFO: Pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016524354s
Jun 23 07:05:14.104: INFO: Pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014461722s
Jun 23 07:05:16.127: INFO: Pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.037130348s
STEP: Saw pod success
Jun 23 07:05:16.127: INFO: Pod "pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1" satisfied condition "Succeeded or Failed"
Jun 23 07:05:16.159: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:05:16.219: INFO: Waiting for pod pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1 to disappear
Jun 23 07:05:16.228: INFO: Pod pod-configmaps-ffa2cad9-adb2-4242-80e7-ca0dd13b79b1 no longer exists
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.258 seconds]
[sig-storage] ConfigMap
test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":357,"completed":177,"skipped":3386,"failed":0}
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 42 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl expose
  test/e2e/kubectl/kubectl.go:1398
    should create services for rc  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":357,"completed":178,"skipped":3386,"failed":0}
SSSSS
------------------------------
[sig-network] HostPort 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] HostPort
... skipping 57 lines ...
• [SLOW TEST:13.544 seconds]
[sig-network] HostPort
test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":357,"completed":179,"skipped":3391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 07:05:36.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2147" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":357,"completed":180,"skipped":3507,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 29 lines ...
• [SLOW TEST:144.769 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":357,"completed":181,"skipped":3522,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 81 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
Jun 23 07:08:05.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4930" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":357,"completed":182,"skipped":3531,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 66 lines ...
• [SLOW TEST:5.979 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":357,"completed":183,"skipped":3545,"failed":0}
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
... skipping 57 lines ...
• [SLOW TEST:78.450 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":357,"completed":184,"skipped":3545,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in container's command
Jun 23 07:09:29.882: INFO: Waiting up to 5m0s for pod "var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01" in namespace "var-expansion-3232" to be "Succeeded or Failed"
Jun 23 07:09:29.888: INFO: Pod "var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01": Phase="Pending", Reason="", readiness=false. Elapsed: 5.955932ms
Jun 23 07:09:31.891: INFO: Pod "var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009819719s
Jun 23 07:09:33.893: INFO: Pod "var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011187065s
STEP: Saw pod success
Jun 23 07:09:33.893: INFO: Pod "var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01" satisfied condition "Succeeded or Failed"
Jun 23 07:09:33.896: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:09:33.938: INFO: Waiting for pod var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01 to disappear
Jun 23 07:09:33.942: INFO: Pod var-expansion-32fdf300-9035-4fbe-8b05-243bf6b26c01 no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jun 23 07:09:33.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3232" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":357,"completed":185,"skipped":3553,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 38 lines ...
test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  test/e2e/scheduling/preemption.go:683
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":357,"completed":186,"skipped":3555,"failed":0}
S
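
The preemption tests above rely on PriorityClass objects to rank pods; a pod referencing a higher-value class can displace lower-priority pods when the scheduler cannot otherwise place it. A small sketch of such an object with the scheduling/v1 types, where the name, value, and description are placeholders:

package main

import (
    "fmt"

    schedulingv1 "k8s.io/api/scheduling/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pc := &schedulingv1.PriorityClass{
        ObjectMeta:    metav1.ObjectMeta{Name: "high-priority-demo"},
        Value:         1000000, // larger value = higher scheduling priority
        GlobalDefault: false,
        Description:   "Example class for pods that may preempt lower-priority workloads.",
    }
    fmt.Printf("%+v\n", pc)
}
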
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:10:34.258: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 07:10:34.300: INFO: Waiting up to 2m0s for pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9" in namespace "var-expansion-4168" to be "container 0 failed with reason CreateContainerConfigError"
Jun 23 07:10:34.311: INFO: Pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9": Phase="Pending", Reason="", readiness=false. Elapsed: 10.339844ms
Jun 23 07:10:36.315: INFO: Pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015148826s
Jun 23 07:10:36.315: INFO: Pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9" satisfied condition "container 0 failed with reason CreateContainerConfigError"
Jun 23 07:10:36.315: INFO: Deleting pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9" in namespace "var-expansion-4168"
Jun 23 07:10:36.324: INFO: Wait up to 5m0s for pod "var-expansion-4d1d852e-fe00-457f-8d6e-83be16ea34f9" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jun 23 07:10:38.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4168" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":357,"completed":187,"skipped":3556,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
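
The variable-expansion test above expects CreateContainerConfigError because backticks are not valid in a subPathExpr; only $(VAR) references to the container's own env vars are expanded. A sketch of the valid form of such a mount, with placeholder volume, env, and path names:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:  "var-expansion-demo",
        Image: "busybox:1.36", // placeholder image
        Env: []corev1.EnvVar{{
            Name: "POD_NAME",
            ValueFrom: &corev1.EnvVarSource{
                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
            },
        }},
        VolumeMounts: []corev1.VolumeMount{{
            Name:      "workdir",
            MountPath: "/logs",
            // $(POD_NAME) is expanded per pod; shell syntax such as backticks is rejected.
            SubPathExpr: "$(POD_NAME)",
        }},
    }
    fmt.Printf("%+v\n", container)
}
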
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:242.964 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":357,"completed":188,"skipped":3608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 96 lines ...
• [SLOW TEST:44.984 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":357,"completed":189,"skipped":3654,"failed":0}
S
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 11 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
Jun 23 07:15:26.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3048" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":357,"completed":190,"skipped":3655,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be immutable if `immutable` field is set [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 6 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 07:15:26.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3203" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":357,"completed":191,"skipped":3666,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 43 lines ...
Jun 23 07:15:31.453: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=crd-publish-openapi-9145 explain e2e-test-crd-publish-openapi-9124-crds.spec'
Jun 23 07:15:31.625: INFO: stderr: ""
Jun 23 07:15:31.625: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-9124-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jun 23 07:15:31.625: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=crd-publish-openapi-9145 explain e2e-test-crd-publish-openapi-9124-crds.spec.bars'
Jun 23 07:15:31.787: INFO: stderr: ""
Jun 23 07:15:31.787: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-9124-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   feeling\t<string>\n     Whether Bar is feeling great.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jun 23 07:15:31.787: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=crd-publish-openapi-9145 explain e2e-test-crd-publish-openapi-9124-crds.spec.bars2'
Jun 23 07:15:31.948: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:15:34.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9145" for this suite.

• [SLOW TEST:7.837 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":357,"completed":192,"skipped":3672,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 80 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":193,"skipped":3678,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  Deployment should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Jun 23 07:16:01.185: INFO: Pod "test-new-deployment-68c48f9ff9-n29js" is available:
&Pod{ObjectMeta:{test-new-deployment-68c48f9ff9-n29js test-new-deployment-68c48f9ff9- deployment-96  5cceea58-1711-42c7-859d-5c227afa1544 15212 0 2022-06-23 07:15:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:68c48f9ff9] map[] [{apps/v1 ReplicaSet test-new-deployment-68c48f9ff9 2ad3866f-9ee1-41bc-a351-e7828d3f6201 0xc0038d8e10 0xc0038d8e11}] [] [{kube-controller-manager Update v1 2022-06-23 07:15:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ad3866f-9ee1-41bc-a351-e7828d3f6201\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:16:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.0.246\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w6p6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6p6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup
:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:15:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:16:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:16:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:15:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:10.64.0.246,StartTime:2022-06-23 07:15:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 07:16:00 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f2e5b73865aba64cc458394762c4337337d90a700fb6242ab4354e088aab7fef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.246,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
Jun 23 07:16:01.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-96" for this suite.
•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":357,"completed":194,"skipped":3679,"failed":0}
SSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] PodTemplates
... skipping 6 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] PodTemplates
  test/e2e/framework/framework.go:187
Jun 23 07:16:01.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5382" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":357,"completed":195,"skipped":3683,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 31 lines ...
• [SLOW TEST:15.071 seconds]
[sig-api-machinery] Aggregator
test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":357,"completed":196,"skipped":3688,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
Jun 23 07:16:20.589: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jun 23 07:16:20.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1136" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":357,"completed":197,"skipped":3709,"failed":0}
SSSSSSSSSSS
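
The check above hinges on the container's terminationMessagePolicy. A minimal sketch of the relevant spec fields follows; the container name, image and command are illustrative assumptions, not taken from this run.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// terminationMessageContainer sets FallbackToLogsOnError: if the container
// exits without writing to /dev/termination-log, the kubelet fills the
// termination message from the tail of the container's log instead, so a
// successful, silent container ends up with an empty message.
func terminationMessageContainer() corev1.Container {
    return corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "busybox",
        Command:                  []string{"sh", "-c", "true"},
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
}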
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 14 lines ...
Jun 23 07:16:20.701: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-414  aeb1614d-7955-42e9-aeaa-111bd6e5e33a 15396 0 2022-06-23 07:16:20 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-23 07:16:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jun 23 07:16:20.701: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-414  aeb1614d-7955-42e9-aeaa-111bd6e5e33a 15397 0 2022-06-23 07:16:20 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-06-23 07:16:20 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:187
Jun 23 07:16:20.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-414" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":357,"completed":198,"skipped":3720,"failed":0}
S
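
Starting a watch at a specific resourceVersion, as the test above does for ConfigMaps, looks roughly like this with client-go; the clientset, namespace and resource version are assumed placeholders.

package e2esketch

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchFromResourceVersion opens a watch on ConfigMaps starting at a known
// resourceVersion, so only events newer than that version are delivered.
func watchFromResourceVersion(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Printf("got %s event for %T\n", ev.Type, ev.Object)
    }
    return nil
}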
------------------------------
[sig-node] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Jun 23 07:16:20.709: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:647
STEP: creating the pod with failed condition
Jun 23 07:16:20.750: INFO: Waiting up to 2m0s for pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84" in namespace "var-expansion-1565" to be "running"
Jun 23 07:16:20.756: INFO: Pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724914ms
Jun 23 07:16:22.761: INFO: Pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011582861s
Jun 23 07:16:24.791: INFO: Pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041030775s
Jun 23 07:16:26.761: INFO: Pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84": Phase="Pending", Reason="", readiness=false. Elapsed: 6.011120117s
Jun 23 07:16:28.761: INFO: Pod "var-expansion-94c40bf0-d224-4372-977d-1afe4b2e3b84": Phase="Pending", Reason="", readiness=false. Elapsed: 8.011557385s
... skipping 73 lines ...
• [SLOW TEST:154.622 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":357,"completed":199,"skipped":3721,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 34 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should have a working scale subresource [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":357,"completed":200,"skipped":3741,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 10 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  test/e2e/framework/framework.go:187
Jun 23 07:19:15.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3957" for this suite.
STEP: Destroying namespace "nspatchtest-64e8e08c-ed19-4be3-8f42-29df449fac0e-7854" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":357,"completed":201,"skipped":3796,"failed":0}
SSSS
------------------------------
[sig-apps] ReplicaSet 
  should validate Replicaset Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 41 lines ...
• [SLOW TEST:5.203 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":357,"completed":202,"skipped":3800,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:19:21.105: INFO: Waiting up to 5m0s for pod "downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62" in namespace "downward-api-2801" to be "Succeeded or Failed"
Jun 23 07:19:21.162: INFO: Pod "downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62": Phase="Pending", Reason="", readiness=false. Elapsed: 56.699927ms
Jun 23 07:19:23.167: INFO: Pod "downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061702453s
Jun 23 07:19:25.169: INFO: Pod "downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063989588s
STEP: Saw pod success
Jun 23 07:19:25.170: INFO: Pod "downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62" satisfied condition "Succeeded or Failed"
Jun 23 07:19:25.175: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:19:25.226: INFO: Waiting for pod downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62 to disappear
Jun 23 07:19:25.235: INFO: Pod downward-api-f8837583-e643-4be2-92a3-6fb7d3556a62 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jun 23 07:19:25.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2801" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":357,"completed":203,"skipped":3815,"failed":0}
SSSSS
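
The host-IP env var checked above comes from a downward-API fieldRef on status.hostIP. A minimal pod sketch follows; the pod name, image and echo command are illustrative assumptions.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostIPEnvPod exposes the node's IP to the container through the downward
// API, the same mechanism the test above asserts on.
func hostIPEnvPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downward-api-host-ip"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "dapi-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "echo HOST_IP=$HOST_IP"},
                Env: []corev1.EnvVar{{
                    Name: "HOST_IP",
                    ValueFrom: &corev1.EnvVarSource{
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
                    },
                }},
            }},
        },
    }
}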
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-8548/configmap-test-1101f9eb-f4b4-4bfc-92d4-e28217cc2842
STEP: Creating a pod to test consume configMaps
Jun 23 07:19:25.299: INFO: Waiting up to 5m0s for pod "pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127" in namespace "configmap-8548" to be "Succeeded or Failed"
Jun 23 07:19:25.305: INFO: Pod "pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127": Phase="Pending", Reason="", readiness=false. Elapsed: 6.918372ms
Jun 23 07:19:27.310: INFO: Pod "pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011033065s
Jun 23 07:19:29.310: INFO: Pod "pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01143768s
STEP: Saw pod success
Jun 23 07:19:29.310: INFO: Pod "pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127" satisfied condition "Succeeded or Failed"
Jun 23 07:19:29.314: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127 container env-test: <nil>
STEP: delete the pod
Jun 23 07:19:29.363: INFO: Waiting for pod pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127 to disappear
Jun 23 07:19:29.372: INFO: Pod pod-configmaps-961d0619-ff1d-4a6c-be58-2a3a04912127 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 07:19:29.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8548" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":357,"completed":204,"skipped":3820,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-be3b83bd-9a98-493d-9baf-9be9268f54ba
STEP: Creating a pod to test consume secrets
Jun 23 07:19:29.511: INFO: Waiting up to 5m0s for pod "pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12" in namespace "secrets-4268" to be "Succeeded or Failed"
Jun 23 07:19:29.531: INFO: Pod "pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12": Phase="Pending", Reason="", readiness=false. Elapsed: 19.317603ms
Jun 23 07:19:31.535: INFO: Pod "pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023969211s
Jun 23 07:19:33.536: INFO: Pod "pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024595362s
STEP: Saw pod success
Jun 23 07:19:33.536: INFO: Pod "pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12" satisfied condition "Succeeded or Failed"
Jun 23 07:19:33.539: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:19:33.588: INFO: Waiting for pod pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12 to disappear
Jun 23 07:19:33.593: INFO: Pod pod-secrets-768f9498-1aa4-4b52-b593-cf167d407b12 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:19:33.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4268" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":205,"skipped":3834,"failed":0}
SS
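
The non-root/defaultMode/fsGroup combination verified above maps onto the pod-spec fields below. The UID, group, file mode, secret name and mount path are illustrative assumptions, not values from this run.

package e2esketch

import corev1 "k8s.io/api/core/v1"

// secretVolumePod mounts a secret read-only with a restricted defaultMode
// and runs as a non-root user whose fsGroup grants access to the files.
func secretVolumePod() *corev1.Pod {
    uid, fsGroup := int64(1000), int64(1000)
    mode := int32(0440)
    return &corev1.Pod{
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            SecurityContext: &corev1.PodSecurityContext{
                RunAsUser: &uid,
                FSGroup:   &fsGroup,
            },
            Volumes: []corev1.Volume{{
                Name: "secret-volume",
                VolumeSource: corev1.VolumeSource{
                    Secret: &corev1.SecretVolumeSource{
                        SecretName:  "secret-test",
                        DefaultMode: &mode,
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "secret-volume-test",
                Image:   "busybox",
                Command: []string{"sh", "-c", "ls -l /etc/secret-volume"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "secret-volume",
                    MountPath: "/etc/secret-volume",
                    ReadOnly:  true,
                }},
            }},
        },
    }
}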
------------------------------
[sig-node] Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override all
Jun 23 07:19:33.645: INFO: Waiting up to 5m0s for pod "client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e" in namespace "containers-2741" to be "Succeeded or Failed"
Jun 23 07:19:33.653: INFO: Pod "client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.856612ms
Jun 23 07:19:35.658: INFO: Pod "client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012478118s
Jun 23 07:19:37.658: INFO: Pod "client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0127484s
STEP: Saw pod success
Jun 23 07:19:37.658: INFO: Pod "client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e" satisfied condition "Succeeded or Failed"
Jun 23 07:19:37.661: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:19:37.683: INFO: Waiting for pod client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e to disappear
Jun 23 07:19:37.688: INFO: Pod client-containers-c6196843-4969-4eec-8ec0-2d18dcb48f0e no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
Jun 23 07:19:37.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2741" for this suite.
•{"msg":"PASSED [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":357,"completed":206,"skipped":3836,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should run the lifecycle of a Deployment [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 120 lines ...
• [SLOW TEST:8.437 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":357,"completed":207,"skipped":3865,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should succeed in writing subpaths in container [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 38 lines ...
• [SLOW TEST:36.838 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":357,"completed":208,"skipped":3873,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should list, patch and delete a collection of StatefulSets [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 32 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should list, patch and delete a collection of StatefulSets [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":357,"completed":209,"skipped":3920,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-ac456b9e-86da-40d8-8aea-88ecded285b7
STEP: Creating a pod to test consume configMaps
Jun 23 07:20:43.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a" in namespace "projected-2398" to be "Succeeded or Failed"
Jun 23 07:20:43.293: INFO: Pod "pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152872ms
Jun 23 07:20:45.299: INFO: Pod "pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011589464s
Jun 23 07:20:47.298: INFO: Pod "pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010827151s
STEP: Saw pod success
Jun 23 07:20:47.298: INFO: Pod "pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a" satisfied condition "Succeeded or Failed"
Jun 23 07:20:47.301: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:20:47.325: INFO: Waiting for pod pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a to disappear
Jun 23 07:20:47.328: INFO: Pod pod-projected-configmaps-a7a2236d-ca02-4b19-bf80-41cc8ede673a no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 07:20:47.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2398" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":357,"completed":210,"skipped":3922,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Ingress API
... skipping 26 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  test/e2e/framework/framework.go:187
Jun 23 07:20:47.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-5498" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":357,"completed":211,"skipped":3938,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should apply changes to a job status [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
... skipping 11 lines ...
STEP: updating /status
STEP: get /status
[AfterEach] [sig-apps] Job
  test/e2e/framework/framework.go:187
Jun 23 07:20:51.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4581" for this suite.
•{"msg":"PASSED [sig-apps] Job should apply changes to a job status [Conformance]","total":357,"completed":212,"skipped":4007,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 59 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should perform rolling updates and roll backs of template modifications [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":357,"completed":213,"skipped":4010,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 23 lines ...
• [SLOW TEST:6.830 seconds]
[sig-storage] Downward API volume
test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":357,"completed":214,"skipped":4011,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 57 lines ...
• [SLOW TEST:9.919 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":357,"completed":215,"skipped":4021,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 21 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:187
Jun 23 07:22:51.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2281" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":357,"completed":216,"skipped":4085,"failed":0}
SSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
... skipping 20 lines ...
• [SLOW TEST:34.973 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":357,"completed":217,"skipped":4090,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] RuntimeClass 
  should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] RuntimeClass
... skipping 6 lines ...
[It] should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
Jun 23 07:23:26.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-2552" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]","total":357,"completed":218,"skipped":4104,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:6.966 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":357,"completed":219,"skipped":4106,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Jun 23 07:23:33.523: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-4800 create -f -'
Jun 23 07:23:33.810: INFO: stderr: ""
Jun 23 07:23:33.810: INFO: stdout: "pod/pause created\n"
Jun 23 07:23:33.810: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jun 23 07:23:33.810: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4800" to be "running and ready"
Jun 23 07:23:33.814: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.296483ms
Jun 23 07:23:33.814: INFO: Error evaluating pod condition running and ready: want pod 'pause' on '' to be 'Running' but was 'Pending'
Jun 23 07:23:35.819: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.009090222s
Jun 23 07:23:35.819: INFO: Pod "pause" satisfied condition "running and ready"
Jun 23 07:23:35.819: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  test/e2e/framework/framework.go:647
STEP: adding the label testing-label with value testing-label-value to a pod
... skipping 25 lines ...
Jun 23 07:23:36.382: INFO: stderr: ""
Jun 23 07:23:36.382: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:23:36.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4800" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":357,"completed":220,"skipped":4138,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  test/e2e/framework/framework.go:647
Jun 23 07:23:36.607: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:23:37.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7691" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":357,"completed":221,"skipped":4149,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
• [SLOW TEST:6.159 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":357,"completed":222,"skipped":4151,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-f4f10932-3528-4ff3-b2c1-405a8d87dd31
STEP: Creating a pod to test consume secrets
Jun 23 07:23:43.933: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a" in namespace "projected-6621" to be "Succeeded or Failed"
Jun 23 07:23:43.941: INFO: Pod "pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.754833ms
Jun 23 07:23:45.946: INFO: Pod "pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0132046s
Jun 23 07:23:47.945: INFO: Pod "pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012486374s
STEP: Saw pod success
Jun 23 07:23:47.946: INFO: Pod "pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a" satisfied condition "Succeeded or Failed"
Jun 23 07:23:47.948: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:23:47.973: INFO: Waiting for pod pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a to disappear
Jun 23 07:23:47.976: INFO: Pod pod-projected-secrets-01cce02c-a35e-42b0-99b1-51c5fbccba5a no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:23:47.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6621" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":223,"skipped":4153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 50 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":357,"completed":224,"skipped":4187,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-9f547656-6874-4a22-9a67-44286cc1d911
STEP: Creating a pod to test consume secrets
Jun 23 07:24:58.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5" in namespace "projected-452" to be "Succeeded or Failed"
Jun 23 07:24:58.744: INFO: Pod "pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.485528ms
Jun 23 07:25:00.749: INFO: Pod "pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01042371s
Jun 23 07:25:02.749: INFO: Pod "pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010455342s
STEP: Saw pod success
Jun 23 07:25:02.749: INFO: Pod "pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5" satisfied condition "Succeeded or Failed"
Jun 23 07:25:02.751: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:25:02.777: INFO: Waiting for pod pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5 to disappear
Jun 23 07:25:02.781: INFO: Pod pod-projected-secrets-78064fe4-deee-45b6-baf2-69ed993c7de5 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:25:02.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-452" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":357,"completed":225,"skipped":4188,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should schedule multiple jobs concurrently [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
• [SLOW TEST:118.094 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":357,"completed":226,"skipped":4209,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should replace jobs when ReplaceConcurrent [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] CronJob
... skipping 20 lines ...
• [SLOW TEST:120.158 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":357,"completed":227,"skipped":4260,"failed":0}
SSSSSSSSSSSSSSS
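
A CronJob with concurrencyPolicy: Replace, the behaviour the test above waits for, can be declared roughly as follows; the schedule, image and sleep duration are illustrative assumptions.

package e2esketch

import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// replaceConcurrentCronJob declares a CronJob whose still-running job is
// killed and replaced when the next scheduled run fires.
func replaceConcurrentCronJob() *batchv1.CronJob {
    return &batchv1.CronJob{
        ObjectMeta: metav1.ObjectMeta{Name: "replace-demo"},
        Spec: batchv1.CronJobSpec{
            Schedule:          "*/1 * * * *",
            ConcurrencyPolicy: batchv1.ReplaceConcurrent,
            JobTemplate: batchv1.JobTemplateSpec{
                Spec: batchv1.JobSpec{
                    Template: corev1.PodTemplateSpec{
                        Spec: corev1.PodSpec{
                            RestartPolicy: corev1.RestartPolicyNever,
                            Containers: []corev1.Container{{
                                Name:    "worker",
                                Image:   "busybox",
                                Command: []string{"sleep", "300"},
                            }},
                        },
                    },
                },
            },
        },
    }
}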
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
Jun 23 07:29:06.573: INFO: Waiting up to 5m0s for pod "execpod-affinityc948b" in namespace "services-2389" to be "running"
Jun 23 07:29:06.584: INFO: Pod "execpod-affinityc948b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.531801ms
Jun 23 07:29:08.589: INFO: Pod "execpod-affinityc948b": Phase="Running", Reason="", readiness=true. Elapsed: 2.016423236s
Jun 23 07:29:08.589: INFO: Pod "execpod-affinityc948b" satisfied condition "running"
Jun 23 07:29:09.590: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Jun 23 07:29:10.784: INFO: rc: 1
Jun 23 07:29:10.784: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 07:29:11.784: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Jun 23 07:29:13.027: INFO: rc: 1
Jun 23 07:29:13.027: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-timeout 80
nc: connect to affinity-clusterip-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 07:29:13.784: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Jun 23 07:29:13.942: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Jun 23 07:29:13.942: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 07:29:13.942: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2389 exec execpod-affinityc948b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.68.78 80'
... skipping 38 lines ...
• [SLOW TEST:36.330 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":357,"completed":228,"skipped":4275,"failed":0}
SS
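
The session-affinity timeout probed above with nc is configured on the Service itself. A minimal sketch follows; the selector, port and 10-second timeout are assumptions (the suite's actual timeout value is not visible in this log), while the service name matches the one used above.

package e2esketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientIPAffinityService pins each client IP to one backend pod until the
// configured timeout elapses, which is the behaviour the test exercises.
func clientIPAffinityService() *corev1.Service {
    timeout := int32(10)
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
        Spec: corev1.ServiceSpec{
            Selector:        map[string]string{"app": "affinity-backend"},
            Ports:           []corev1.ServicePort{{Port: 80}},
            SessionAffinity: corev1.ServiceAffinityClientIP,
            SessionAffinityConfig: &corev1.SessionAffinityConfig{
                ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
            },
        },
    }
}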
------------------------------
[sig-node] RuntimeClass 
   should support RuntimeClasses API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] RuntimeClass
... skipping 19 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
Jun 23 07:29:37.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1550" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":357,"completed":229,"skipped":4277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-d6c7dd89-6401-4db6-8249-ef8b627210e8
STEP: Creating a pod to test consume secrets
Jun 23 07:29:37.560: INFO: Waiting up to 5m0s for pod "pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231" in namespace "secrets-4197" to be "Succeeded or Failed"
Jun 23 07:29:37.565: INFO: Pod "pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231": Phase="Pending", Reason="", readiness=false. Elapsed: 5.471478ms
Jun 23 07:29:39.570: INFO: Pod "pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009711299s
Jun 23 07:29:41.575: INFO: Pod "pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015151164s
STEP: Saw pod success
Jun 23 07:29:41.575: INFO: Pod "pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231" satisfied condition "Succeeded or Failed"
Jun 23 07:29:41.582: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231 container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:29:41.631: INFO: Waiting for pod pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231 to disappear
Jun 23 07:29:41.635: INFO: Pod pod-secrets-bdfb74a3-0693-48b8-b227-8ab0d595f231 no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:29:41.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4197" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":230,"skipped":4300,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 17 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
Jun 23 07:29:44.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6909" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":357,"completed":231,"skipped":4323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 21 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  test/e2e/framework/framework.go:187
Jun 23 07:29:46.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-431" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [Conformance]","total":357,"completed":232,"skipped":4346,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 81 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":357,"completed":233,"skipped":4362,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 56 lines ...
• [SLOW TEST:74.365 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":357,"completed":234,"skipped":4394,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in container's args
Jun 23 07:31:26.025: INFO: Waiting up to 5m0s for pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26" in namespace "var-expansion-3382" to be "Succeeded or Failed"
Jun 23 07:31:26.049: INFO: Pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26": Phase="Pending", Reason="", readiness=false. Elapsed: 23.912647ms
Jun 23 07:31:28.054: INFO: Pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029509265s
Jun 23 07:31:30.054: INFO: Pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0295584s
Jun 23 07:31:32.055: INFO: Pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030683112s
STEP: Saw pod success
Jun 23 07:31:32.055: INFO: Pod "var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26" satisfied condition "Succeeded or Failed"
Jun 23 07:31:32.062: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:31:32.179: INFO: Waiting for pod var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26 to disappear
Jun 23 07:31:32.184: INFO: Pod var-expansion-53009a64-2384-4d25-b9e2-e10093b8ff26 no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.279 seconds]
[sig-node] Variable Expansion
test/e2e/common/node/framework.go:23
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":357,"completed":235,"skipped":4417,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
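
The args substitution verified above relies on the $(VAR) expansion the kubelet performs against the container's own environment before the process starts. A minimal sketch with an assumed variable name and message:

package e2esketch

import corev1 "k8s.io/api/core/v1"

// argExpansionContainer references $(MESSAGE) in its args; the kubelet
// replaces it with the value of the MESSAGE env var defined on the container.
func argExpansionContainer() corev1.Container {
    return corev1.Container{
        Name:    "dapi-container",
        Image:   "busybox",
        Command: []string{"sh", "-c"},
        Args:    []string{"echo $(MESSAGE)"},
        Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello from args"}},
    }
}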
------------------------------
[sig-apps] DisruptionController 
  should create a PodDisruptionBudget [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] DisruptionController
... skipping 15 lines ...
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  test/e2e/framework/framework.go:187
Jun 23 07:31:36.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-4418" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":357,"completed":236,"skipped":4440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 9 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  test/e2e/framework/framework.go:187
Jun 23 07:31:39.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9921" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":357,"completed":237,"skipped":4462,"failed":0}
SSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
... skipping 29 lines ...
• [SLOW TEST:9.190 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":357,"completed":238,"skipped":4468,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
Jun 23 07:31:52.546: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jun 23 07:31:52.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3716" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":357,"completed":239,"skipped":4490,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:31:52.634: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c" in namespace "downward-api-7197" to be "Succeeded or Failed"
Jun 23 07:31:52.648: INFO: Pod "downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.584973ms
Jun 23 07:31:54.654: INFO: Pod "downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c": Phase="Running", Reason="", readiness=false. Elapsed: 2.020353386s
Jun 23 07:31:56.661: INFO: Pod "downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026526204s
STEP: Saw pod success
Jun 23 07:31:56.661: INFO: Pod "downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c" satisfied condition "Succeeded or Failed"
Jun 23 07:31:56.664: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c container client-container: <nil>
STEP: delete the pod
Jun 23 07:31:56.704: INFO: Waiting for pod downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c to disappear
Jun 23 07:31:56.707: INFO: Pod downwardapi-volume-c1977816-802d-400e-b6ec-4edda27d954c no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 07:31:56.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7197" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":357,"completed":240,"skipped":4500,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 10 lines ...
Jun 23 07:31:56.761: INFO: Waiting up to 5m0s for pod "server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3" in namespace "pods-4603" to be "running and ready"
Jun 23 07:31:56.767: INFO: Pod "server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.236783ms
Jun 23 07:31:56.767: INFO: The phase of Pod server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3 is Pending, waiting for it to be Running (with Ready = true)
Jun 23 07:31:58.772: INFO: Pod "server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3": Phase="Running", Reason="", readiness=true. Elapsed: 2.010476153s
Jun 23 07:31:58.772: INFO: The phase of Pod server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3 is Running (Ready = true)
Jun 23 07:31:58.772: INFO: Pod "server-envvars-99cd56bf-3303-4e89-b238-f888bea2a2a3" satisfied condition "running and ready"
Jun 23 07:31:58.810: INFO: Waiting up to 5m0s for pod "client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f" in namespace "pods-4603" to be "Succeeded or Failed"
Jun 23 07:31:58.831: INFO: Pod "client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.851912ms
Jun 23 07:32:00.836: INFO: Pod "client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026128228s
Jun 23 07:32:02.835: INFO: Pod "client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025239652s
STEP: Saw pod success
Jun 23 07:32:02.835: INFO: Pod "client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f" satisfied condition "Succeeded or Failed"
Jun 23 07:32:02.838: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f container env3cont: <nil>
STEP: delete the pod
Jun 23 07:32:02.856: INFO: Waiting for pod client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f to disappear
Jun 23 07:32:02.860: INFO: Pod client-envvars-a8cb7a76-07d0-441d-aaf1-98af82383e2f no longer exists
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.150 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should contain environment variables for services [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":357,"completed":241,"skipped":4512,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 19 lines ...
• [SLOW TEST:16.155 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":357,"completed":242,"skipped":4516,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:32:19.066: INFO: Waiting up to 5m0s for pod "downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7" in namespace "projected-9116" to be "Succeeded or Failed"
Jun 23 07:32:19.077: INFO: Pod "downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.217826ms
Jun 23 07:32:21.083: INFO: Pod "downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01709217s
Jun 23 07:32:23.082: INFO: Pod "downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016059834s
STEP: Saw pod success
Jun 23 07:32:23.082: INFO: Pod "downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7" satisfied condition "Succeeded or Failed"
Jun 23 07:32:23.085: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7 container client-container: <nil>
STEP: delete the pod
Jun 23 07:32:23.108: INFO: Waiting for pod downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7 to disappear
Jun 23 07:32:23.112: INFO: Pod downwardapi-volume-605432ad-67ad-4e90-992d-fd8e631baae7 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 07:32:23.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9116" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":243,"skipped":4537,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:8.916 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":357,"completed":244,"skipped":4554,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 12 lines ...
Jun 23 07:32:32.217: INFO: stderr: ""
Jun 23 07:32:32.217: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta2\nbatch/v1\ncertificates.k8s.io/v1\ncloud.google.com/v1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\nevents.k8s.io/v1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetrics.k8s.io/v1beta1\nnetworking.gke.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\npolicy/v1\nrbac.authorization.k8s.io/v1\nscalingpolicy.kope.io/v1alpha1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:32:32.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6176" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":357,"completed":245,"skipped":4570,"failed":0}
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 48 lines ...
test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  test/e2e/scheduling/preemption.go:458
    runs ReplicaSets to verify preemption running path [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":357,"completed":246,"skipped":4580,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 35 lines ...
• [SLOW TEST:7.339 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":357,"completed":247,"skipped":4582,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-map-5073dd85-8279-405f-aee1-e011ef7ed1ab
STEP: Creating a pod to test consume secrets
Jun 23 07:34:07.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8" in namespace "projected-2006" to be "Succeeded or Failed"
Jun 23 07:34:07.228: INFO: Pod "pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8": Phase="Pending", Reason="", readiness=false. Elapsed: 28.489551ms
Jun 23 07:34:09.234: INFO: Pod "pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034245954s
Jun 23 07:34:11.235: INFO: Pod "pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035564203s
STEP: Saw pod success
Jun 23 07:34:11.235: INFO: Pod "pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8" satisfied condition "Succeeded or Failed"
Jun 23 07:34:11.238: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:34:11.294: INFO: Waiting for pod pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8 to disappear
Jun 23 07:34:11.298: INFO: Pod pod-projected-secrets-0031b41f-0000-41bf-9da0-544c2f1b74d8 no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:34:11.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2006" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":248,"skipped":4660,"failed":0}

------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 91 lines ...
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-htksr webserver-deployment-5fd5c5f98f- deployment-1815  39e69d4b-f63b-41a1-af6c-e4b40c87d860 19676 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004666d00 0xc004666d01}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pcnh9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pcnh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,R
unAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.598: INFO: Pod "webserver-deployment-5fd5c5f98f-jtzcx" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-jtzcx webserver-deployment-5fd5c5f98f- deployment-1815  5301226c-bcb6-432a-ae13-c18eea8896d6 19671 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004666ee0 0xc004666ee1}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9572p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9572p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,R
unAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.598: INFO: Pod "webserver-deployment-5fd5c5f98f-p5bfb" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-p5bfb webserver-deployment-5fd5c5f98f- deployment-1815  ba397898-cb7a-4cd8-a09f-8e9ab8542e85 19682 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004667060 0xc004667061}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:34:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tkdv7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tkdv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-h59d,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:,StartTime:2022-06-23 07:34:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.598: INFO: Pod "webserver-deployment-5fd5c5f98f-r6l25" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-r6l25 webserver-deployment-5fd5c5f98f- deployment-1815  4acbeea5-c5e5-42a6-8ae1-ec08e16d35be 19623 0 2022-06-23 07:34:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004667230 0xc004667231}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.2.149\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kjxb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjxb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalati
on:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-h59d,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:15 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:10.64.2.149,StartTime:2022-06-23 07:34:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.2.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.599: INFO: Pod "webserver-deployment-5fd5c5f98f-rf4jk" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-rf4jk webserver-deployment-5fd5c5f98f- deployment-1815  268111ca-dbe1-4ac8-bf09-ac8418601245 19688 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004667450 0xc004667451}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:34:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x674r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x674r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2022-06-23 07:34:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.599: INFO: Pod "webserver-deployment-5fd5c5f98f-xvzbp" is not available:
&Pod{ObjectMeta:{webserver-deployment-5fd5c5f98f-xvzbp webserver-deployment-5fd5c5f98f- deployment-1815  1c20b36e-38a2-48a1-bb44-e2ceed60e0d6 19685 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5fd5c5f98f] map[] [{apps/v1 ReplicaSet webserver-deployment-5fd5c5f98f ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8 0xc004667620 0xc004667621}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba04a1ce-0555-4318-b9a1-a6c2ecb6b0f8\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:34:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tdcm4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdcm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-qsw7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:,StartTime:2022-06-23 07:34:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jun 23 07:34:19.599: INFO: Pod "webserver-deployment-68c48f9ff9-4k2pn" is not available:
&Pod{ObjectMeta:{webserver-deployment-68c48f9ff9-4k2pn webserver-deployment-68c48f9ff9- deployment-1815  7d6dd1b1-88c3-410d-88b0-029615557043 19691 0 2022-06-23 07:34:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:68c48f9ff9] map[] [{apps/v1 ReplicaSet webserver-deployment-68c48f9ff9 cf1da4c8-9781-457a-9f74-9e2956473941 0xc0046677f0 0xc0046677f1}] [] [{kube-controller-manager Update v1 2022-06-23 07:34:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"cf1da4c8-9781-457a-9f74-9e2956473941\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:34:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dfqrq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dfqrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptio
ns:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-qsw7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:34:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:,StartTime:2022-06-23 07:34:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 43 lines ...
• [SLOW TEST:8.306 seconds]
[sig-apps] Deployment
test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":357,"completed":249,"skipped":4660,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with projected pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-projected-vq8t
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 07:34:19.681: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-vq8t" in namespace "subpath-5518" to be "Succeeded or Failed"
Jun 23 07:34:19.687: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Pending", Reason="", readiness=false. Elapsed: 5.075268ms
Jun 23 07:34:21.692: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0101103s
Jun 23 07:34:23.691: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 4.009902788s
Jun 23 07:34:25.699: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 6.017142373s
Jun 23 07:34:27.693: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 8.011374212s
Jun 23 07:34:29.692: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 10.010283387s
... skipping 3 lines ...
Jun 23 07:34:37.692: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 18.010520657s
Jun 23 07:34:39.693: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 20.011096369s
Jun 23 07:34:41.692: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=true. Elapsed: 22.010216986s
Jun 23 07:34:43.693: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Running", Reason="", readiness=false. Elapsed: 24.011727033s
Jun 23 07:34:45.693: INFO: Pod "pod-subpath-test-projected-vq8t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.010991992s
STEP: Saw pod success
Jun 23 07:34:45.693: INFO: Pod "pod-subpath-test-projected-vq8t" satisfied condition "Succeeded or Failed"
Jun 23 07:34:45.696: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-subpath-test-projected-vq8t container test-container-subpath-projected-vq8t: <nil>
STEP: delete the pod
Jun 23 07:34:45.735: INFO: Waiting for pod pod-subpath-test-projected-vq8t to disappear
Jun 23 07:34:45.741: INFO: Pod pod-subpath-test-projected-vq8t no longer exists
STEP: Deleting pod pod-subpath-test-projected-vq8t
Jun 23 07:34:45.741: INFO: Deleting pod "pod-subpath-test-projected-vq8t" in namespace "subpath-5518"
... skipping 7 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with projected pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Conformance]","total":357,"completed":250,"skipped":4679,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jun 23 07:34:45.794: INFO: Waiting up to 5m0s for pod "pod-e921a7c1-3c2f-453e-b16e-106d2e39f527" in namespace "emptydir-515" to be "Succeeded or Failed"
Jun 23 07:34:45.802: INFO: Pod "pod-e921a7c1-3c2f-453e-b16e-106d2e39f527": Phase="Pending", Reason="", readiness=false. Elapsed: 7.878317ms
Jun 23 07:34:47.806: INFO: Pod "pod-e921a7c1-3c2f-453e-b16e-106d2e39f527": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01178894s
Jun 23 07:34:49.808: INFO: Pod "pod-e921a7c1-3c2f-453e-b16e-106d2e39f527": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013102566s
STEP: Saw pod success
Jun 23 07:34:49.808: INFO: Pod "pod-e921a7c1-3c2f-453e-b16e-106d2e39f527" satisfied condition "Succeeded or Failed"
Jun 23 07:34:49.810: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-e921a7c1-3c2f-453e-b16e-106d2e39f527 container test-container: <nil>
STEP: delete the pod
Jun 23 07:34:49.834: INFO: Waiting for pod pod-e921a7c1-3c2f-453e-b16e-106d2e39f527 to disappear
Jun 23 07:34:49.839: INFO: Pod pod-e921a7c1-3c2f-453e-b16e-106d2e39f527 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:34:49.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-515" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":251,"skipped":4680,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 3 lines ...
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 07:34:49.899: INFO: created pod
Jun 23 07:34:49.899: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-935" to be "Succeeded or Failed"
Jun 23 07:34:49.905: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.019185ms
Jun 23 07:34:51.909: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009860705s
Jun 23 07:34:53.909: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009827733s
STEP: Saw pod success
Jun 23 07:34:53.909: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jun 23 07:35:23.909: INFO: polling logs
Jun 23 07:35:23.917: INFO: Pod logs: 
I0623 07:34:50.842977       1 log.go:195] OK: Got token
I0623 07:34:50.843020       1 log.go:195] validating with in-cluster discovery
I0623 07:34:50.843473       1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local
I0623 07:34:50.843519       1 log.go:195] Full, not-validated claims: 
... skipping 13 lines ...
• [SLOW TEST:34.084 seconds]
[sig-auth] ServiceAccounts
test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":357,"completed":252,"skipped":4696,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] CronJob
... skipping 18 lines ...
• [SLOW TEST:300.063 seconds]
[sig-apps] CronJob
test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":357,"completed":253,"skipped":4704,"failed":0}
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 13 lines ...
STEP: waiting for the root ca configmap reconciled
Jun 23 07:40:25.051: INFO: Reconciled root ca configmap in namespace "svcaccounts-3734"
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
Jun 23 07:40:25.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3734" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":357,"completed":254,"skipped":4714,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should patch a secret [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Secrets
... skipping 11 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:40:25.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6499" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":357,"completed":255,"skipped":4725,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 243 lines ...
• [SLOW TEST:304.298 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":357,"completed":256,"skipped":4763,"failed":0}
SSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 07:45:29.471: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap that has name configmap-test-emptyKey-ed3a5c46-05b6-4086-93f4-c274c9e3d393
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 07:45:29.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7529" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":357,"completed":257,"skipped":4766,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 13 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 07:45:29.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2067" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":357,"completed":258,"skipped":4772,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-64cd161f-583c-447a-9d48-366754f38618
STEP: Creating a pod to test consume secrets
Jun 23 07:45:29.639: INFO: Waiting up to 5m0s for pod "pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe" in namespace "secrets-7830" to be "Succeeded or Failed"
Jun 23 07:45:29.642: INFO: Pod "pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.718383ms
Jun 23 07:45:31.646: INFO: Pod "pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007542556s
Jun 23 07:45:33.648: INFO: Pod "pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009145441s
STEP: Saw pod success
Jun 23 07:45:33.648: INFO: Pod "pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe" satisfied condition "Succeeded or Failed"
Jun 23 07:45:33.651: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:45:33.691: INFO: Waiting for pod pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe to disappear
Jun 23 07:45:33.695: INFO: Pod pod-secrets-2d9d8fd0-665f-4d12-8bc8-75a184392bfe no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:45:33.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7830" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":357,"completed":259,"skipped":4859,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
• [SLOW TEST:13.865 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":357,"completed":260,"skipped":4878,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] LimitRange
... skipping 38 lines ...
• [SLOW TEST:7.307 seconds]
[sig-scheduling] LimitRange
test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":357,"completed":261,"skipped":4903,"failed":0}
SSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Kubelet
... skipping 14 lines ...
Jun 23 07:45:56.967: INFO: The phase of Pod busybox-host-aliases66e4cb7c-51ee-46b1-9cb8-5a6e437474df is Running (Ready = true)
Jun 23 07:45:56.967: INFO: Pod "busybox-host-aliases66e4cb7c-51ee-46b1-9cb8-5a6e437474df" satisfied condition "running and ready"
[AfterEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:187
Jun 23 07:45:56.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7401" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":262,"skipped":4908,"failed":0}
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 19 lines ...
STEP: Creating secret with name s-test-opt-create-64037a30-4337-40e6-856b-8b5236f40dd5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:46:01.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-387" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":263,"skipped":4910,"failed":0}

------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-ec32fa35-1aff-4fd7-8320-8e2ce30c29c6
STEP: Creating a pod to test consume secrets
Jun 23 07:46:01.326: INFO: Waiting up to 5m0s for pod "pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba" in namespace "secrets-7870" to be "Succeeded or Failed"
Jun 23 07:46:01.334: INFO: Pod "pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba": Phase="Pending", Reason="", readiness=false. Elapsed: 7.409698ms
Jun 23 07:46:03.340: INFO: Pod "pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013554482s
Jun 23 07:46:05.341: INFO: Pod "pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015216113s
STEP: Saw pod success
Jun 23 07:46:05.341: INFO: Pod "pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba" satisfied condition "Succeeded or Failed"
Jun 23 07:46:05.352: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:46:05.382: INFO: Waiting for pod pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba to disappear
Jun 23 07:46:05.386: INFO: Pod pod-secrets-d8fa07f0-8911-4e6a-b0b3-55fb976878ba no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:46:05.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7870" for this suite.
STEP: Destroying namespace "secret-namespace-4028" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":357,"completed":264,"skipped":4910,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 18 lines ...
Jun 23 07:46:06.038: INFO: stderr: ""
Jun 23 07:46:06.038: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:46:06.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6418" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":357,"completed":265,"skipped":4933,"failed":0}
SSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:187
Jun 23 07:46:06.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-368" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":357,"completed":266,"skipped":4937,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 21 lines ...
• [SLOW TEST:16.185 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":357,"completed":267,"skipped":4940,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replicaset should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicaSet
... skipping 21 lines ...
• [SLOW TEST:5.206 seconds]
[sig-apps] ReplicaSet
test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":357,"completed":268,"skipped":4955,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] DNS
... skipping 33 lines ...
Jun 23 07:46:31.879: INFO: Pod "dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6": Phase="Running", Reason="", readiness=true. Elapsed: 2.024698195s
Jun 23 07:46:31.879: INFO: Pod "dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6" satisfied condition "running"
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jun 23 07:46:31.898: INFO: File jessie_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:31.898: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [jessie_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:46:36.914: INFO: File wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:36.923: INFO: File jessie_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:36.923: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local jessie_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:46:41.914: INFO: File jessie_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:41.914: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [jessie_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:46:46.906: INFO: File wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:46.911: INFO: File jessie_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:46.911: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local jessie_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:46:51.908: INFO: File wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:51.918: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:46:56.912: INFO: File jessie_udp@dns-test-service-3.dns-293.svc.cluster.local from pod  dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jun 23 07:46:56.912: INFO: Lookups using dns-293/dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 failed for: [jessie_udp@dns-test-service-3.dns-293.svc.cluster.local]

Jun 23 07:47:01.937: INFO: DNS probes using dns-test-6957511e-349a-4064-892e-7afd7e8aa1f6 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-293.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-293.svc.cluster.local; sleep 1; done
... skipping 20 lines ...
• [SLOW TEST:36.515 seconds]
[sig-network] DNS
test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":357,"completed":269,"skipped":4966,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected secret
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating projection with secret that has name projected-secret-test-561c6463-0b8f-4b61-af4f-e07b9646533d
STEP: Creating a pod to test consume secrets
Jun 23 07:47:04.220: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad" in namespace "projected-4577" to be "Succeeded or Failed"
Jun 23 07:47:04.229: INFO: Pod "pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.770491ms
Jun 23 07:47:06.235: INFO: Pod "pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014812542s
Jun 23 07:47:08.234: INFO: Pod "pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013848966s
STEP: Saw pod success
Jun 23 07:47:08.234: INFO: Pod "pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad" satisfied condition "Succeeded or Failed"
Jun 23 07:47:08.237: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad container projected-secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:47:08.266: INFO: Waiting for pod pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad to disappear
Jun 23 07:47:08.269: INFO: Pod pod-projected-secrets-a31fc6b3-ab7b-41d8-8c2c-97e5f6174cad no longer exists
[AfterEach] [sig-storage] Projected secret
  test/e2e/framework/framework.go:187
Jun 23 07:47:08.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4577" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":270,"skipped":5010,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 19 lines ...
STEP: Creating configMap with name cm-test-opt-create-9af370c9-51c1-4983-8a09-b168991aedd4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 07:47:12.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1243" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":271,"skipped":5024,"failed":0}
S
------------------------------
[sig-node] Containers 
  should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Containers
... skipping 3 lines ...
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test override command
Jun 23 07:47:12.521: INFO: Waiting up to 5m0s for pod "client-containers-42c642a6-152f-4368-81a9-5459fba286bb" in namespace "containers-8986" to be "Succeeded or Failed"
Jun 23 07:47:12.527: INFO: Pod "client-containers-42c642a6-152f-4368-81a9-5459fba286bb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.911257ms
Jun 23 07:47:14.533: INFO: Pod "client-containers-42c642a6-152f-4368-81a9-5459fba286bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012431371s
Jun 23 07:47:16.530: INFO: Pod "client-containers-42c642a6-152f-4368-81a9-5459fba286bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009733202s
STEP: Saw pod success
Jun 23 07:47:16.530: INFO: Pod "client-containers-42c642a6-152f-4368-81a9-5459fba286bb" satisfied condition "Succeeded or Failed"
Jun 23 07:47:16.535: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod client-containers-42c642a6-152f-4368-81a9-5459fba286bb container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:47:16.572: INFO: Waiting for pod client-containers-42c642a6-152f-4368-81a9-5459fba286bb to disappear
Jun 23 07:47:16.578: INFO: Pod client-containers-42c642a6-152f-4368-81a9-5459fba286bb no longer exists
[AfterEach] [sig-node] Containers
  test/e2e/framework/framework.go:187
Jun 23 07:47:16.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8986" for this suite.
•{"msg":"PASSED [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]","total":357,"completed":272,"skipped":5025,"failed":0}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory request [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:47:16.633: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94" in namespace "projected-8002" to be "Succeeded or Failed"
Jun 23 07:47:16.638: INFO: Pod "downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94": Phase="Pending", Reason="", readiness=false. Elapsed: 5.459217ms
Jun 23 07:47:18.642: INFO: Pod "downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009251101s
Jun 23 07:47:20.643: INFO: Pod "downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009754292s
STEP: Saw pod success
Jun 23 07:47:20.643: INFO: Pod "downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94" satisfied condition "Succeeded or Failed"
Jun 23 07:47:20.647: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94 container client-container: <nil>
STEP: delete the pod
Jun 23 07:47:20.670: INFO: Waiting for pod downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94 to disappear
Jun 23 07:47:20.674: INFO: Pod downwardapi-volume-ae1c9595-e567-4844-979b-dec763e34a94 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 07:47:20.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8002" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":357,"completed":273,"skipped":5026,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 19 lines ...
Jun 23 07:47:24.758: INFO: The phase of Pod pod-logs-websocket-ce4739a0-3c7b-46a1-a7df-d98b75cba990 is Running (Ready = true)
Jun 23 07:47:24.758: INFO: Pod "pod-logs-websocket-ce4739a0-3c7b-46a1-a7df-d98b75cba990" satisfied condition "running and ready"
[AfterEach] [sig-node] Pods
  test/e2e/framework/framework.go:187
Jun 23 07:47:24.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7246" for this suite.
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":357,"completed":274,"skipped":5053,"failed":0}
SSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 39 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":357,"completed":275,"skipped":5058,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 35 lines ...
• [SLOW TEST:88.470 seconds]
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
test/e2e/node/framework.go:23
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":357,"completed":276,"skipped":5070,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:49:01.439: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be" in namespace "downward-api-3244" to be "Succeeded or Failed"
Jun 23 07:49:01.450: INFO: Pod "downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be": Phase="Pending", Reason="", readiness=false. Elapsed: 11.053864ms
Jun 23 07:49:03.455: INFO: Pod "downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015740399s
Jun 23 07:49:05.457: INFO: Pod "downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017365073s
STEP: Saw pod success
Jun 23 07:49:05.457: INFO: Pod "downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be" satisfied condition "Succeeded or Failed"
Jun 23 07:49:05.460: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be container client-container: <nil>
STEP: delete the pod
Jun 23 07:49:05.499: INFO: Waiting for pod downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be to disappear
Jun 23 07:49:05.504: INFO: Pod downwardapi-volume-4b593b56-f7e5-430b-9054-b0c2c10133be no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 07:49:05.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3244" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":357,"completed":277,"skipped":5087,"failed":0}

------------------------------
[sig-node] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test env composition
Jun 23 07:49:05.557: INFO: Waiting up to 5m0s for pod "var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22" in namespace "var-expansion-3609" to be "Succeeded or Failed"
Jun 23 07:49:05.563: INFO: Pod "var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22": Phase="Pending", Reason="", readiness=false. Elapsed: 5.625419ms
Jun 23 07:49:07.571: INFO: Pod "var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013334386s
Jun 23 07:49:09.571: INFO: Pod "var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013279555s
STEP: Saw pod success
Jun 23 07:49:09.571: INFO: Pod "var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22" satisfied condition "Succeeded or Failed"
Jun 23 07:49:09.580: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:49:09.628: INFO: Waiting for pod var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22 to disappear
Jun 23 07:49:09.635: INFO: Pod var-expansion-a5447e0f-a388-4090-b4a3-0aba7da57a22 no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jun 23 07:49:09.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3609" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":357,"completed":278,"skipped":5087,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Watchers
... skipping 30 lines ...
• [SLOW TEST:10.154 seconds]
[sig-api-machinery] Watchers
test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":357,"completed":279,"skipped":5093,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should block an eviction until the PDB is updated to allow it [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] DisruptionController
... skipping 36 lines ...
• [SLOW TEST:6.489 seconds]
[sig-apps] DisruptionController
test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":357,"completed":280,"skipped":5106,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating secret with name secret-test-map-342b8e76-3961-4857-aaab-14b921325ff7
STEP: Creating a pod to test consume secrets
Jun 23 07:49:26.483: INFO: Waiting up to 5m0s for pod "pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb" in namespace "secrets-5131" to be "Succeeded or Failed"
Jun 23 07:49:26.513: INFO: Pod "pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb": Phase="Pending", Reason="", readiness=false. Elapsed: 30.339441ms
Jun 23 07:49:28.533: INFO: Pod "pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049680041s
Jun 23 07:49:30.520: INFO: Pod "pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03645609s
STEP: Saw pod success
Jun 23 07:49:30.520: INFO: Pod "pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb" satisfied condition "Succeeded or Failed"
Jun 23 07:49:30.524: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb container secret-volume-test: <nil>
STEP: delete the pod
Jun 23 07:49:30.558: INFO: Waiting for pod pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb to disappear
Jun 23 07:49:30.568: INFO: Pod pod-secrets-c29638e5-b149-48f9-bda1-5432c9bf34fb no longer exists
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 07:49:30.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5131" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":281,"skipped":5114,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Networking
... skipping 81 lines ...
test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  test/e2e/common/network/networking.go:32
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":357,"completed":282,"skipped":5152,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 18 lines ...
• [SLOW TEST:24.263 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":357,"completed":283,"skipped":5157,"failed":0}
SSSSSSS
------------------------------
[sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] PreStop
... skipping 41 lines ...
• [SLOW TEST:9.168 seconds]
[sig-node] PreStop
test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":357,"completed":284,"skipped":5164,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide container's memory limit [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 07:50:28.646: INFO: Waiting up to 5m0s for pod "downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6" in namespace "projected-1165" to be "Succeeded or Failed"
Jun 23 07:50:28.662: INFO: Pod "downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.177157ms
Jun 23 07:50:30.809: INFO: Pod "downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163200921s
Jun 23 07:50:32.668: INFO: Pod "downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02160794s
STEP: Saw pod success
Jun 23 07:50:32.668: INFO: Pod "downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6" satisfied condition "Succeeded or Failed"
Jun 23 07:50:32.670: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6 container client-container: <nil>
STEP: delete the pod
Jun 23 07:50:32.698: INFO: Waiting for pod downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6 to disappear
Jun 23 07:50:32.701: INFO: Pod downwardapi-volume-07832a00-409e-4490-ade9-c27f3ef2d8e6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 07:50:32.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1165" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":357,"completed":285,"skipped":5186,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 139 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
Jun 23 07:50:36.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1213" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":357,"completed":286,"skipped":5214,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should complete a service status lifecycle [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 43 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 07:50:36.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7252" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":357,"completed":287,"skipped":5231,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 7 lines ...
  test/e2e/framework/framework.go:647
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
Jun 23 07:51:36.836: INFO: Unexpected error: 
    <*errors.errorString | 0xc0002082a0>: {
        s: "timed out waiting for the condition",
    }
Jun 23 07:51:36.836: FAIL: timed out waiting for the condition

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.ensureServicesAreRemovedWhenNamespaceIsDeleted(0xc000b89b80)
	test/e2e/apimachinery/namespace.go:181 +0x865
k8s.io/kubernetes/test/e2e/apimachinery.glob..func16.2()
	test/e2e/apimachinery/namespace.go:246 +0x1d
... skipping 238 lines ...
  test/e2e/framework/framework.go:647

  Jun 23 07:51:36.836: timed out waiting for the condition

  test/e2e/apimachinery/namespace.go:181
------------------------------
{"msg":"FAILED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":357,"completed":287,"skipped":5248,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected configMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name projected-configmap-test-volume-map-a12ca174-0b44-4d08-a61b-831cc0d042d8
STEP: Creating a pod to test consume configMaps
Jun 23 07:51:38.362: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2" in namespace "projected-7106" to be "Succeeded or Failed"
Jun 23 07:51:38.371: INFO: Pod "pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.878004ms
Jun 23 07:51:40.375: INFO: Pod "pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013294061s
Jun 23 07:51:42.376: INFO: Pod "pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013882706s
STEP: Saw pod success
Jun 23 07:51:42.376: INFO: Pod "pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2" satisfied condition "Succeeded or Failed"
Jun 23 07:51:42.378: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2 container agnhost-container: <nil>
STEP: delete the pod
Jun 23 07:51:42.398: INFO: Waiting for pod pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2 to disappear
Jun 23 07:51:42.403: INFO: Pod pod-projected-configmaps-4e443ee5-e618-4497-a084-33ed42b14ad2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  test/e2e/framework/framework.go:187
Jun 23 07:51:42.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7106" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":357,"completed":288,"skipped":5267,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  test/e2e/framework/framework.go:187
Jun 23 07:51:43.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7151" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":357,"completed":289,"skipped":5271,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 161 lines ...
Jun 23 07:51:44.871: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-4203 create -f -'
Jun 23 07:51:45.168: INFO: stderr: ""
Jun 23 07:51:45.168: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jun 23 07:51:45.168: INFO: Waiting for all frontend pods to be Running.
Jun 23 07:51:50.220: INFO: Waiting for frontend to serve content.
Jun 23 07:51:51.402: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Jun 23 07:51:56.436: INFO: Trying to add a new entry to the guestbook.
Jun 23 07:51:56.449: INFO: Verifying that added entry can be retrieved.
Jun 23 07:51:56.466: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Jun 23 07:52:01.497: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-4203 delete --grace-period=0 --force -f -'
Jun 23 07:52:01.647: INFO: stderr: "Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jun 23 07:52:01.647: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jun 23 07:52:01.647: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=kubectl-4203 delete --grace-period=0 --force -f -'
... skipping 25 lines ...
test/e2e/kubectl/framework.go:23
  Guestbook application
  test/e2e/kubectl/kubectl.go:367
    should create and stop a working application  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":357,"completed":290,"skipped":5289,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  test/e2e/framework/framework.go:187
Jun 23 07:52:06.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6788" for this suite.
STEP: Destroying namespace "webhook-6788-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":357,"completed":291,"skipped":5292,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 32 lines ...
• [SLOW TEST:6.259 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":357,"completed":292,"skipped":5319,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 27 lines ...
Jun 23 07:52:15.456: INFO: Selector matched 1 pods for map[app:agnhost]
Jun 23 07:52:15.456: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:52:15.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8211" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":357,"completed":293,"skipped":5336,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 16 lines ...
• [SLOW TEST:5.264 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":357,"completed":294,"skipped":5344,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 50 lines ...
STEP: Deleting pod pod1 in namespace services-2097
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2097 to expose endpoints map[pod2:[80]]
Jun 23 07:52:30.094: INFO: successfully validated that service endpoint-test2 in namespace services-2097 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Jun 23 07:52:31.094: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2097 exec execpodpfnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jun 23 07:52:33.293: INFO: rc: 1
Jun 23 07:52:33.293: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2097 exec execpodpfnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 07:52:34.294: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2097 exec execpodpfnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jun 23 07:52:34.483: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Jun 23 07:52:34.483: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 07:52:34.483: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2097 exec execpodpfnqr -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.104.206 80'
... skipping 12 lines ...
• [SLOW TEST:14.030 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":357,"completed":295,"skipped":5345,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Security Context
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  test/e2e/common/node/security_context.go:48
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
Jun 23 07:52:34.809: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-994698d1-15bb-4a6e-a337-ab8b33dc962a" in namespace "security-context-test-3639" to be "Succeeded or Failed"
Jun 23 07:52:34.814: INFO: Pod "busybox-readonly-false-994698d1-15bb-4a6e-a337-ab8b33dc962a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.988877ms
Jun 23 07:52:36.819: INFO: Pod "busybox-readonly-false-994698d1-15bb-4a6e-a337-ab8b33dc962a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009450404s
Jun 23 07:52:38.819: INFO: Pod "busybox-readonly-false-994698d1-15bb-4a6e-a337-ab8b33dc962a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.00943248s
Jun 23 07:52:38.819: INFO: Pod "busybox-readonly-false-994698d1-15bb-4a6e-a337-ab8b33dc962a" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  test/e2e/framework/framework.go:187
Jun 23 07:52:38.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3639" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":357,"completed":296,"skipped":5369,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] StatefulSet
... skipping 17 lines ...
Jun 23 07:52:38.909: INFO: Pod "test-pod": Phase="Pending", Reason="", readiness=false. Elapsed: 13.327753ms
Jun 23 07:52:40.914: INFO: Pod "test-pod": Phase="Running", Reason="", readiness=true. Elapsed: 2.018678969s
Jun 23 07:52:40.914: INFO: Pod "test-pod" satisfied condition "running"
STEP: Creating statefulset with conflicting port in namespace statefulset-719
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-719
Jun 23 07:52:40.942: INFO: Observed stateful pod in namespace: statefulset-719, name: ss-0, uid: f97d5f19-2cb3-46ae-90bf-daf7a88105fe, status phase: Pending. Waiting for statefulset controller to delete.
Jun 23 07:52:41.005: INFO: Observed stateful pod in namespace: statefulset-719, name: ss-0, uid: f97d5f19-2cb3-46ae-90bf-daf7a88105fe, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 07:52:41.022: INFO: Observed stateful pod in namespace: statefulset-719, name: ss-0, uid: f97d5f19-2cb3-46ae-90bf-daf7a88105fe, status phase: Failed. Waiting for statefulset controller to delete.
Jun 23 07:52:41.032: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-719
STEP: Removing pod with conflicting port in namespace statefulset-719
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-719 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:122
Jun 23 07:52:43.116: INFO: Deleting all statefulset in ns statefulset-719
... skipping 10 lines ...
test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  test/e2e/apps/statefulset.go:101
    Should recreate evicted statefulset [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":357,"completed":297,"skipped":5372,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jun 23 07:52:53.393: INFO: Waiting up to 5m0s for pod "pod-89c9e364-fc8f-4552-b7ad-03d2487e330a" in namespace "emptydir-1982" to be "Succeeded or Failed"
Jun 23 07:52:53.399: INFO: Pod "pod-89c9e364-fc8f-4552-b7ad-03d2487e330a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.418485ms
Jun 23 07:52:55.404: INFO: Pod "pod-89c9e364-fc8f-4552-b7ad-03d2487e330a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01096785s
Jun 23 07:52:57.404: INFO: Pod "pod-89c9e364-fc8f-4552-b7ad-03d2487e330a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010896562s
STEP: Saw pod success
Jun 23 07:52:57.404: INFO: Pod "pod-89c9e364-fc8f-4552-b7ad-03d2487e330a" satisfied condition "Succeeded or Failed"
Jun 23 07:52:57.407: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-89c9e364-fc8f-4552-b7ad-03d2487e330a container test-container: <nil>
STEP: delete the pod
Jun 23 07:52:57.431: INFO: Waiting for pod pod-89c9e364-fc8f-4552-b7ad-03d2487e330a to disappear
Jun 23 07:52:57.434: INFO: Pod pod-89c9e364-fc8f-4552-b7ad-03d2487e330a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 07:52:57.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1982" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":298,"skipped":5400,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 12 lines ...
Jun 23 07:52:57.541: INFO: stderr: ""
Jun 23 07:52:57.541: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://35.202.0.82\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  test/e2e/framework/framework.go:187
Jun 23 07:52:57.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8504" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":357,"completed":299,"skipped":5424,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Kubelet
... skipping 10 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[AfterEach] [sig-node] Kubelet
  test/e2e/framework/framework.go:187
Jun 23 07:53:01.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4669" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":357,"completed":300,"skipped":5444,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a volume subpath [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Variable Expansion
... skipping 3 lines ...
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test substitution in volume subpath
Jun 23 07:53:01.682: INFO: Waiting up to 5m0s for pod "var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd" in namespace "var-expansion-4360" to be "Succeeded or Failed"
Jun 23 07:53:01.694: INFO: Pod "var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.447453ms
Jun 23 07:53:03.698: INFO: Pod "var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015755712s
Jun 23 07:53:05.698: INFO: Pod "var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016038847s
STEP: Saw pod success
Jun 23 07:53:05.698: INFO: Pod "var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd" satisfied condition "Succeeded or Failed"
Jun 23 07:53:05.701: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:53:05.724: INFO: Waiting for pod var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd to disappear
Jun 23 07:53:05.727: INFO: Pod var-expansion-7a3926d3-7c7b-4139-a092-088ec9d6d2fd no longer exists
[AfterEach] [sig-node] Variable Expansion
  test/e2e/framework/framework.go:187
Jun 23 07:53:05.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4360" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":357,"completed":301,"skipped":5463,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-apps] Deployment 
  should validate Deployment Status endpoints [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Deployment
... skipping 63 lines ...
Jun 23 07:53:07.881: INFO: Pod "test-deployment-9r7k2-6465649447-slcmx" is available:
&Pod{ObjectMeta:{test-deployment-9r7k2-6465649447-slcmx test-deployment-9r7k2-6465649447- deployment-2530  21e5ac2a-ee69-4c81-b152-32fcf905b03b 23576 0 2022-06-23 07:53:05 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:6465649447] map[] [{apps/v1 ReplicaSet test-deployment-9r7k2-6465649447 de1a9809-975c-428b-abe6-af5483971f13 0xc004602b80 0xc004602b81}] [] [{kube-controller-manager Update v1 2022-06-23 07:53:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"de1a9809-975c-428b-abe6-af5483971f13\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-06-23 07:53:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.0.97\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wzmnd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wzmnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-d118eff5-f2b9-minion-group-jjkh,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:53:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:53:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:53:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-06-23 07:53:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:10.64.0.97,StartTime:2022-06-23 07:53:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-06-23 07:53:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:registry.k8s.io/e2e-test-images/httpd:2.4.38-2,ImageID:registry.k8s.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://471f8d371614eb05dd2e2e1704bbd8520edd6f3d5d25fced81c1f48ff4ca97a7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.0.97,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  test/e2e/framework/framework.go:187
Jun 23 07:53:07.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2530" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":357,"completed":302,"skipped":5464,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:7.737 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":357,"completed":303,"skipped":5477,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Jun 23 07:53:15.890: INFO: created pod pod-service-account-nomountsa-nomountspec
Jun 23 07:53:15.890: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  test/e2e/framework/framework.go:187
Jun 23 07:53:15.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2450" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":357,"completed":304,"skipped":5479,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 103 lines ...
• [SLOW TEST:68.790 seconds]
[sig-storage] EmptyDir wrapper volumes
test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":357,"completed":305,"skipped":5489,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:22.171 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":357,"completed":306,"skipped":5496,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Runtime
... skipping 32 lines ...
  test/e2e/common/node/runtime.go:43
    when starting a container that exits
    test/e2e/common/node/runtime.go:44
      should run with the expected status [NodeConformance] [Conformance]
      test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":357,"completed":307,"skipped":5535,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 07:55:15.408: INFO: Waiting up to 5m0s for pod "downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468" in namespace "downward-api-8116" to be "Succeeded or Failed"
Jun 23 07:55:15.415: INFO: Pod "downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468": Phase="Pending", Reason="", readiness=false. Elapsed: 6.941609ms
Jun 23 07:55:17.420: INFO: Pod "downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01142714s
Jun 23 07:55:19.422: INFO: Pod "downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013278737s
STEP: Saw pod success
Jun 23 07:55:19.422: INFO: Pod "downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468" satisfied condition "Succeeded or Failed"
Jun 23 07:55:19.425: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468 container dapi-container: <nil>
STEP: delete the pod
Jun 23 07:55:19.466: INFO: Waiting for pod downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468 to disappear
Jun 23 07:55:19.470: INFO: Pod downward-api-8fd0e6d6-1ebc-47f9-9bc1-355fa9125468 no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jun 23 07:55:19.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8116" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":357,"completed":308,"skipped":5540,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 28 lines ...
• [SLOW TEST:135.353 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":357,"completed":309,"skipped":5578,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 14 lines ...
Jun 23 07:57:36.286: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  test/e2e/framework/framework.go:187
Jun 23 07:57:36.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1082" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":357,"completed":310,"skipped":5591,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] ReplicationController
... skipping 25 lines ...
• [SLOW TEST:10.192 seconds]
[sig-apps] ReplicationController
test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":357,"completed":311,"skipped":5596,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected combined
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap with name configmap-projected-all-test-volume-67b84fca-bad9-4920-af96-7b331cd6a8af
STEP: Creating secret with name secret-projected-all-test-volume-5aeb0346-13b9-4d77-852c-e208ebe3236c
STEP: Creating a pod to test Check all projections for projected volume plugin
Jun 23 07:57:46.677: INFO: Waiting up to 5m0s for pod "projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be" in namespace "projected-8229" to be "Succeeded or Failed"
Jun 23 07:57:46.686: INFO: Pod "projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be": Phase="Pending", Reason="", readiness=false. Elapsed: 9.407241ms
Jun 23 07:57:48.690: INFO: Pod "projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013191547s
Jun 23 07:57:50.691: INFO: Pod "projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014751167s
STEP: Saw pod success
Jun 23 07:57:50.691: INFO: Pod "projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be" satisfied condition "Succeeded or Failed"
Jun 23 07:57:50.695: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be container projected-all-volume-test: <nil>
STEP: delete the pod
Jun 23 07:57:50.740: INFO: Waiting for pod projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be to disappear
Jun 23 07:57:50.745: INFO: Pod projected-volume-4c3d0554-f47a-4005-8563-2c8d194c09be no longer exists
[AfterEach] [sig-storage] Projected combined
  test/e2e/framework/framework.go:187
Jun 23 07:57:50.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8229" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":357,"completed":312,"skipped":5597,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 27 lines ...
• [SLOW TEST:16.284 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":357,"completed":313,"skipped":5623,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 39 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute prestop http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":357,"completed":314,"skipped":5645,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:243.806 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":357,"completed":315,"skipped":5664,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
[AfterEach] [sig-network] Services
  test/e2e/framework/framework.go:187
Jun 23 08:02:19.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5638" for this suite.
[AfterEach] [sig-network] Services
  test/e2e/network/service.go:762
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":357,"completed":316,"skipped":5719,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 26 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 08:02:19.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-2592" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":357,"completed":317,"skipped":5763,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 23 08:02:19.789: INFO: Waiting up to 5m0s for pod "pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a" in namespace "emptydir-5310" to be "Succeeded or Failed"
Jun 23 08:02:19.808: INFO: Pod "pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.409237ms
Jun 23 08:02:21.814: INFO: Pod "pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025194812s
Jun 23 08:02:23.814: INFO: Pod "pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025064393s
STEP: Saw pod success
Jun 23 08:02:23.814: INFO: Pod "pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a" satisfied condition "Succeeded or Failed"
Jun 23 08:02:23.817: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a container test-container: <nil>
STEP: delete the pod
Jun 23 08:02:23.854: INFO: Waiting for pod pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a to disappear
Jun 23 08:02:23.858: INFO: Pod pod-f6982ee2-5eb9-4954-8c06-c7097a1a563a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 08:02:23.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5310" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":318,"skipped":5785,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSS
------------------------------
[sig-node] Pods 
  should delete a collection of pods [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Pods
... skipping 10 lines ...
STEP: Create set of pods
Jun 23 08:02:23.910: INFO: created test-pod-1
Jun 23 08:02:23.927: INFO: created test-pod-2
Jun 23 08:02:23.956: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Jun 23 08:02:23.956: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-6289' to be running and ready
Jun 23 08:02:24.010: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 08:02:24.010: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 08:02:24.010: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jun 23 08:02:24.010: INFO: 0 / 3 pods in namespace 'pods-6289' are running and ready (0 seconds elapsed)
Jun 23 08:02:24.010: INFO: expected 0 pod replicas in namespace 'pods-6289', 0 are Running and Ready.
Jun 23 08:02:24.010: INFO: POD         NODE                                 PHASE    GRACE  CONDITIONS
Jun 23 08:02:24.010: INFO: test-pod-1  kt2-d118eff5-f2b9-minion-group-jjkh  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC  }]
Jun 23 08:02:24.010: INFO: test-pod-2  kt2-d118eff5-f2b9-minion-group-jjkh  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC  }]
Jun 23 08:02:24.010: INFO: test-pod-3  kt2-d118eff5-f2b9-minion-group-qsw7  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-06-23 08:02:23 +0000 UTC  }]
... skipping 12 lines ...
• [SLOW TEST:5.250 seconds]
[sig-node] Pods
test/e2e/common/node/framework.go:23
  should delete a collection of pods [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":357,"completed":319,"skipped":5789,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 23 08:02:29.190: INFO: Waiting up to 5m0s for pod "pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7" in namespace "emptydir-2609" to be "Succeeded or Failed"
Jun 23 08:02:29.204: INFO: Pod "pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.827089ms
Jun 23 08:02:31.210: INFO: Pod "pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019693627s
Jun 23 08:02:33.209: INFO: Pod "pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019201663s
STEP: Saw pod success
Jun 23 08:02:33.209: INFO: Pod "pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7" satisfied condition "Succeeded or Failed"
Jun 23 08:02:33.212: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7 container test-container: <nil>
STEP: delete the pod
Jun 23 08:02:33.240: INFO: Waiting for pod pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7 to disappear
Jun 23 08:02:33.245: INFO: Pod pod-14b2325d-070d-4f7b-9650-5e9739ae8ba7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 08:02:33.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2609" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":320,"skipped":5866,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 08:02:33.299: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080" in namespace "downward-api-5881" to be "Succeeded or Failed"
Jun 23 08:02:33.308: INFO: Pod "downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080": Phase="Pending", Reason="", readiness=false. Elapsed: 8.906372ms
Jun 23 08:02:35.316: INFO: Pod "downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016180968s
Jun 23 08:02:37.313: INFO: Pod "downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013605879s
STEP: Saw pod success
Jun 23 08:02:37.313: INFO: Pod "downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080" satisfied condition "Succeeded or Failed"
Jun 23 08:02:37.317: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080 container client-container: <nil>
STEP: delete the pod
Jun 23 08:02:37.341: INFO: Waiting for pod downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080 to disappear
Jun 23 08:02:37.354: INFO: Pod downwardapi-volume-c55bc269-2f1f-406c-b68a-043fea2ea080 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 08:02:37.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5881" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":357,"completed":321,"skipped":5880,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:186
STEP: Creating a kubernetes client
Jun 23 08:02:37.381: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 7 lines ...
STEP: Wait for the deployment to be ready
Jun 23 08:02:38.128: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jun 23 08:02:40.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.June, 23, 8, 2, 38, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 8, 2, 38, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.June, 23, 8, 2, 38, 0, time.Local), LastTransitionTime:time.Date(2022, time.June, 23, 8, 2, 38, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f8b6c9658\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jun 23 08:02:43.165: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:647
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 08:02:43.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8561" for this suite.
STEP: Destroying namespace "webhook-8561-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  test/e2e/apimachinery/webhook.go:104

• [SLOW TEST:5.981 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":357,"completed":322,"skipped":5887,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] version v1
... skipping 346 lines ...
test/e2e/network/common/framework.go:23
  version v1
  test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":357,"completed":323,"skipped":5917,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir volume type on node default medium
Jun 23 08:02:50.033: INFO: Waiting up to 5m0s for pod "pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb" in namespace "emptydir-8987" to be "Succeeded or Failed"
Jun 23 08:02:50.039: INFO: Pod "pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb": Phase="Pending", Reason="", readiness=false. Elapsed: 5.503909ms
Jun 23 08:02:52.043: INFO: Pod "pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010007869s
Jun 23 08:02:54.044: INFO: Pod "pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011164222s
STEP: Saw pod success
Jun 23 08:02:54.045: INFO: Pod "pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb" satisfied condition "Succeeded or Failed"
Jun 23 08:02:54.048: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb container test-container: <nil>
STEP: delete the pod
Jun 23 08:02:54.077: INFO: Waiting for pod pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb to disappear
Jun 23 08:02:54.081: INFO: Pod pod-ffeb3d5b-f068-47b1-904f-07c9f7c50abb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
Jun 23 08:02:54.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8987" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":324,"skipped":5918,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 16 lines ...
• [SLOW TEST:7.117 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":357,"completed":325,"skipped":6031,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] EndpointSlice
... skipping 20 lines ...
• [SLOW TEST:30.329 seconds]
[sig-network] EndpointSlice
test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":357,"completed":326,"skipped":6039,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 16 lines ...
Jun 23 08:03:34.721: INFO: Waiting up to 5m0s for pod "execpodn5wj7" in namespace "services-8000" to be "running"
Jun 23 08:03:34.727: INFO: Pod "execpodn5wj7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.350488ms
Jun 23 08:03:36.735: INFO: Pod "execpodn5wj7": Phase="Running", Reason="", readiness=true. Elapsed: 2.014018541s
Jun 23 08:03:36.735: INFO: Pod "execpodn5wj7" satisfied condition "running"
Jun 23 08:03:37.742: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 23 08:03:38.937: INFO: rc: 1
Jun 23 08:03:38.937: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 08:03:39.937: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 23 08:03:41.177: INFO: rc: 1
Jun 23 08:03:41.178: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ nc -v -t -w 2 externalname-service 80
+ echo hostName
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 08:03:41.937: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jun 23 08:03:43.159: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jun 23 08:03:43.160: INFO: stdout: ""
Jun 23 08:03:43.937: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-8000 exec execpodn5wj7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
... skipping 22 lines ...
• [SLOW TEST:14.560 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":357,"completed":327,"skipped":6047,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SS
------------------------------
[sig-node] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 16 lines ...
• [SLOW TEST:60.113 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":357,"completed":328,"skipped":6049,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  test/e2e/common/storage/projected_downwardapi.go:43
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 08:04:46.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec" in namespace "projected-4754" to be "Succeeded or Failed"
Jun 23 08:04:46.289: INFO: Pod "downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec": Phase="Pending", Reason="", readiness=false. Elapsed: 7.427445ms
Jun 23 08:04:48.295: INFO: Pod "downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013143417s
Jun 23 08:04:50.295: INFO: Pod "downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013091763s
STEP: Saw pod success
Jun 23 08:04:50.295: INFO: Pod "downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec" satisfied condition "Succeeded or Failed"
Jun 23 08:04:50.299: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec container client-container: <nil>
STEP: delete the pod
Jun 23 08:04:50.406: INFO: Waiting for pod downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec to disappear
Jun 23 08:04:50.412: INFO: Pod downwardapi-volume-47ffee97-d1c1-4ef4-a785-5fedcbeaebec no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 08:04:50.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4754" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":357,"completed":329,"skipped":6076,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] RuntimeClass 
  should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] RuntimeClass
... skipping 8 lines ...
STEP: Deleting RuntimeClass runtimeclass-5781-delete-me
STEP: Waiting for the RuntimeClass to disappear
[AfterEach] [sig-node] RuntimeClass
  test/e2e/framework/framework.go:187
Jun 23 08:04:50.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-5781" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]","total":357,"completed":330,"skipped":6086,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 16 lines ...
Jun 23 08:04:52.576: INFO: Pod "annotationupdate588f3ac5-19c3-4781-900f-7c31dbaf815b" satisfied condition "running and ready"
Jun 23 08:04:53.101: INFO: Successfully updated pod "annotationupdate588f3ac5-19c3-4781-900f-7c31dbaf815b"
[AfterEach] [sig-storage] Projected downwardAPI
  test/e2e/framework/framework.go:187
Jun 23 08:04:55.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8287" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":357,"completed":331,"skipped":6088,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Runtime
... skipping 13 lines ...
Jun 23 08:04:59.276: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jun 23 08:04:59.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8112" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":357,"completed":332,"skipped":6105,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 24 lines ...
• [SLOW TEST:13.162 seconds]
[sig-api-machinery] ResourceQuota
test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":357,"completed":333,"skipped":6117,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Runtime
... skipping 3 lines ...
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 23 08:05:16.611: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  test/e2e/framework/framework.go:187
Jun 23 08:05:16.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6505" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":357,"completed":334,"skipped":6135,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 17 lines ...
• [SLOW TEST:5.675 seconds]
[sig-node] InitContainer [NodeConformance]
test/e2e/common/node/framework.go:23
  should invoke init containers on a RestartNever pod [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":357,"completed":335,"skipped":6180,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 7 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/common/node/sysctl.go:67
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
Jun 23 08:05:24.398: INFO: Waiting up to 3m0s for pod "sysctl-8ff8644f-545e-4c6e-94c8-eda9b16aead8" in namespace "sysctl-3014" to be "completed"
Jun 23 08:05:24.401: INFO: Pod "sysctl-8ff8644f-545e-4c6e-94c8-eda9b16aead8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.914519ms
Jun 23 08:05:26.408: INFO: Pod "sysctl-8ff8644f-545e-4c6e-94c8-eda9b16aead8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009421961s
Jun 23 08:05:26.408: INFO: Pod "sysctl-8ff8644f-545e-4c6e-94c8-eda9b16aead8" satisfied condition "completed"
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  test/e2e/framework/framework.go:187
Jun 23 08:05:26.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3014" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":357,"completed":336,"skipped":6181,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 44 lines ...
• [SLOW TEST:10.239 seconds]
[sig-api-machinery] Garbage collector
test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":357,"completed":337,"skipped":6213,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Secrets
... skipping 19 lines ...
STEP: Creating secret with name s-test-opt-create-764702ea-9fe1-47c8-8116-a17b59c7243f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  test/e2e/framework/framework.go:187
Jun 23 08:05:40.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-226" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":338,"skipped":6226,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 3 lines ...
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test emptydir 0777 on node default medium
Jun 23 08:05:40.931: INFO: Waiting up to 5m0s for pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146" in namespace "emptydir-6228" to be "Succeeded or Failed"
Jun 23 08:05:40.939: INFO: Pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146": Phase="Pending", Reason="", readiness=false. Elapsed: 7.834661ms
Jun 23 08:05:42.945: INFO: Pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013843346s
Jun 23 08:05:44.953: INFO: Pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022111254s
Jun 23 08:05:46.943: INFO: Pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012185854s
STEP: Saw pod success
Jun 23 08:05:46.943: INFO: Pod "pod-50787c4f-040f-45ce-8c1d-912c93cce146" satisfied condition "Succeeded or Failed"
Jun 23 08:05:46.947: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-qsw7 pod pod-50787c4f-040f-45ce-8c1d-912c93cce146 container test-container: <nil>
STEP: delete the pod
Jun 23 08:05:46.995: INFO: Waiting for pod pod-50787c4f-040f-45ce-8c1d-912c93cce146 to disappear
Jun 23 08:05:46.999: INFO: Pod pod-50787c4f-040f-45ce-8c1d-912c93cce146 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  test/e2e/framework/framework.go:187
... skipping 3 lines ...
• [SLOW TEST:6.136 seconds]
[sig-storage] EmptyDir volumes
test/e2e/common/storage/framework.go:23
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":357,"completed":339,"skipped":6235,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-cli] Kubectl client
... skipping 38 lines ...
test/e2e/kubectl/framework.go:23
  Kubectl replace
  test/e2e/kubectl/kubectl.go:1720
    should update a single-container pod's image  [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":357,"completed":340,"skipped":6236,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] Services
... skipping 28 lines ...
Jun 23 08:05:59.689: INFO: Waiting up to 5m0s for pod "execpod-affinitywxpcn" in namespace "services-2263" to be "running"
Jun 23 08:05:59.709: INFO: Pod "execpod-affinitywxpcn": Phase="Pending", Reason="", readiness=false. Elapsed: 20.954572ms
Jun 23 08:06:01.714: INFO: Pod "execpod-affinitywxpcn": Phase="Running", Reason="", readiness=true. Elapsed: 2.025409494s
Jun 23 08:06:01.714: INFO: Pod "execpod-affinitywxpcn" satisfied condition "running"
Jun 23 08:06:02.721: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jun 23 08:06:03.904: INFO: rc: 1
Jun 23 08:06:03.904: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport-timeout 80
+ echo hostName
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 08:06:04.904: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jun 23 08:06:06.143: INFO: rc: 1
Jun 23 08:06:06.143: INFO: Service reachability failing with error: error running /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Jun 23 08:06:06.905: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jun 23 08:06:07.065: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Jun 23 08:06:07.066: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jun 23 08:06:07.066: INFO: Running '/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubectl --server=https://35.202.0.82 --kubeconfig=/logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig --namespace=services-2263 exec execpod-affinitywxpcn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.221.0 80'
... skipping 47 lines ...
• [SLOW TEST:56.598 seconds]
[sig-network] Services
test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":357,"completed":341,"skipped":6276,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Downward API
... skipping 3 lines ...
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward api env vars
Jun 23 08:06:50.767: INFO: Waiting up to 5m0s for pod "downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d" in namespace "downward-api-4362" to be "Succeeded or Failed"
Jun 23 08:06:50.772: INFO: Pod "downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.932558ms
Jun 23 08:06:52.776: INFO: Pod "downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009631454s
Jun 23 08:06:54.779: INFO: Pod "downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01211124s
STEP: Saw pod success
Jun 23 08:06:54.779: INFO: Pod "downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d" satisfied condition "Succeeded or Failed"
Jun 23 08:06:54.782: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d container dapi-container: <nil>
STEP: delete the pod
Jun 23 08:06:54.806: INFO: Waiting for pod downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d to disappear
Jun 23 08:06:54.809: INFO: Pod downward-api-e3b16d60-f07a-4e88-bbc3-70402a5a414d no longer exists
[AfterEach] [sig-node] Downward API
  test/e2e/framework/framework.go:187
Jun 23 08:06:54.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4362" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":357,"completed":342,"skipped":6281,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Subpath
... skipping 7 lines ...
  test/e2e/storage/subpath.go:40
STEP: Setting up data
[It] should support subpaths with configmap pod [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating pod pod-subpath-test-configmap-4zk8
STEP: Creating a pod to test atomic-volume-subpath
Jun 23 08:06:54.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4zk8" in namespace "subpath-7242" to be "Succeeded or Failed"
Jun 23 08:06:54.890: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.300516ms
Jun 23 08:06:56.897: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 2.016741815s
Jun 23 08:06:58.895: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 4.014625541s
Jun 23 08:07:00.894: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 6.013361063s
Jun 23 08:07:02.895: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 8.013967712s
Jun 23 08:07:04.898: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 10.017055667s
... skipping 2 lines ...
Jun 23 08:07:10.893: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 16.012837218s
Jun 23 08:07:12.901: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 18.020733032s
Jun 23 08:07:14.895: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=true. Elapsed: 20.014490826s
Jun 23 08:07:16.897: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Running", Reason="", readiness=false. Elapsed: 22.016510769s
Jun 23 08:07:18.895: INFO: Pod "pod-subpath-test-configmap-4zk8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.014149794s
STEP: Saw pod success
Jun 23 08:07:18.895: INFO: Pod "pod-subpath-test-configmap-4zk8" satisfied condition "Succeeded or Failed"
Jun 23 08:07:18.898: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-subpath-test-configmap-4zk8 container test-container-subpath-configmap-4zk8: <nil>
STEP: delete the pod
Jun 23 08:07:18.923: INFO: Waiting for pod pod-subpath-test-configmap-4zk8 to disappear
Jun 23 08:07:18.927: INFO: Pod pod-subpath-test-configmap-4zk8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4zk8
Jun 23 08:07:18.927: INFO: Deleting pod "pod-subpath-test-configmap-4zk8" in namespace "subpath-7242"
... skipping 7 lines ...
test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  test/e2e/storage/subpath.go:36
    should support subpaths with configmap pod [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Conformance]","total":357,"completed":343,"skipped":6291,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 7 lines ...
  test/e2e/framework/framework.go:647
Jun 23 08:07:19.003: INFO: >>> kubeConfig: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  test/e2e/framework/framework.go:187
Jun 23 08:07:22.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2683" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":357,"completed":344,"skipped":6300,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should manage the lifecycle of a job [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Job
... skipping 33 lines ...
• [SLOW TEST:10.187 seconds]
[sig-apps] Job
test/e2e/apps/framework.go:23
  should manage the lifecycle of a job [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Job should manage the lifecycle of a job [Conformance]","total":357,"completed":345,"skipped":6311,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 28 lines ...
• [SLOW TEST:13.206 seconds]
[sig-api-machinery] Namespaces [Serial]
test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":357,"completed":346,"skipped":6322,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSS
------------------------------
[sig-network] EndpointSliceMirroring 
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 21 lines ...
• [SLOW TEST:6.133 seconds]
[sig-network] EndpointSliceMirroring
test/e2e/network/common/framework.go:23
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":357,"completed":347,"skipped":6326,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 69 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/framework/framework.go:187
Jun 23 08:07:53.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1344" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  test/e2e/scheduling/predicates.go:83
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":357,"completed":348,"skipped":6356,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:52.225 seconds]
[sig-node] Probing container
test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":357,"completed":349,"skipped":6393,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 55 lines ...
• [SLOW TEST:10.181 seconds]
[sig-apps] Daemon set [Serial]
test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":357,"completed":350,"skipped":6446,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 17 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  test/e2e/framework/framework.go:187
Jun 23 08:08:59.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2945" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":357,"completed":351,"skipped":6461,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-network] EndpointSlice
... skipping 10 lines ...
Jun 23 08:08:59.961: INFO: Endpoints addresses: [35.202.0.82] , ports: [443]
Jun 23 08:08:59.961: INFO: EndpointSlices addresses: [35.202.0.82] , ports: [443]
[AfterEach] [sig-network] EndpointSlice
  test/e2e/framework/framework.go:187
Jun 23 08:08:59.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3062" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":357,"completed":352,"skipped":6490,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] Downward API volume
... skipping 5 lines ...
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  test/e2e/common/storage/downwardapi_volume.go:43
[It] should provide podname only [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating a pod to test downward API volume plugin
Jun 23 08:09:00.041: INFO: Waiting up to 5m0s for pod "downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648" in namespace "downward-api-3471" to be "Succeeded or Failed"
Jun 23 08:09:00.047: INFO: Pod "downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06059ms
Jun 23 08:09:02.052: INFO: Pod "downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010684463s
Jun 23 08:09:04.053: INFO: Pod "downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011475659s
STEP: Saw pod success
Jun 23 08:09:04.053: INFO: Pod "downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648" satisfied condition "Succeeded or Failed"
Jun 23 08:09:04.056: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648 container client-container: <nil>
STEP: delete the pod
Jun 23 08:09:04.098: INFO: Waiting for pod downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648 to disappear
Jun 23 08:09:04.102: INFO: Pod downwardapi-volume-edc67b19-7448-4b6f-9770-ede76cd46648 no longer exists
[AfterEach] [sig-storage] Downward API volume
  test/e2e/framework/framework.go:187
Jun 23 08:09:04.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3471" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":357,"completed":353,"skipped":6524,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] ConfigMap
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
STEP: Creating configMap configmap-3566/configmap-test-9f795b0a-3cce-4ccc-9507-34f3cd564367
STEP: Creating a pod to test consume configMaps
Jun 23 08:09:04.178: INFO: Waiting up to 5m0s for pod "pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621" in namespace "configmap-3566" to be "Succeeded or Failed"
Jun 23 08:09:04.189: INFO: Pod "pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621": Phase="Pending", Reason="", readiness=false. Elapsed: 10.833507ms
Jun 23 08:09:06.200: INFO: Pod "pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02149349s
Jun 23 08:09:08.195: INFO: Pod "pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016112121s
STEP: Saw pod success
Jun 23 08:09:08.195: INFO: Pod "pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621" satisfied condition "Succeeded or Failed"
Jun 23 08:09:08.198: INFO: Trying to get logs from node kt2-d118eff5-f2b9-minion-group-jjkh pod pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621 container env-test: <nil>
STEP: delete the pod
Jun 23 08:09:08.220: INFO: Waiting for pod pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621 to disappear
Jun 23 08:09:08.227: INFO: Pod pod-configmaps-db317d07-57bd-4d7e-a02d-a82f24242621 no longer exists
[AfterEach] [sig-node] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 08:09:08.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3566" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":357,"completed":354,"skipped":6555,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 39 lines ...
test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  test/e2e/common/node/lifecycle_hook.go:46
    should execute poststart http hook properly [NodeConformance] [Conformance]
    test/e2e/framework/framework.go:647
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":357,"completed":355,"skipped":6668,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:647
[BeforeEach] [sig-storage] ConfigMap
... skipping 14 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  test/e2e/framework/framework.go:187
Jun 23 08:09:18.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1049" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":357,"completed":356,"skipped":6676,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}
SSSSSSSSSSJun 23 08:09:18.567: INFO: Running AfterSuite actions on all nodes
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func19.2
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func9.2
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Jun 23 08:09:18.567: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Jun 23 08:09:18.567: INFO: Running AfterSuite actions on node 1
Jun 23 08:09:18.567: INFO: Dumping logs locally to: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791
Jun 23 08:09:18.568: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/junit_01.xml
{"msg":"Test Suite completed","total":357,"completed":356,"skipped":6686,"failed":1,"failures":["[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-api-machinery] Namespaces [Serial] [It] should ensure that all services are removed when a namespace is deleted [Conformance] 
test/e2e/apimachinery/namespace.go:181

Ran 357 of 7043 Specs in 5910.557 seconds
FAIL! -- 356 Passed | 1 Failed | 0 Pending | 6686 Skipped
--- FAIL: TestE2E (5912.16s)
FAIL

Ginkgo ran 1 suite in 1h38m32.253392201s
Test Suite Failed
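The lone failure is the Namespaces [Serial] spec that verifies services are removed when their namespace is deleted, reported at test/e2e/apimachinery/namespace.go:181. A single conformance spec like this can be re-run in isolation against a live cluster; a sketch, assuming a locally built e2e.test binary (the _output/bin path is a typical build location, not taken from this log) and a kubeconfig for the target cluster:

  ./_output/bin/e2e.test --provider=gce --kubeconfig="$KUBECONFIG" \
    --ginkgo.focus='Namespaces \[Serial\].*all services are removed when a namespace is deleted'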
F0623 08:09:18.608462   95089 ginkgo.go:215] failed to run ginkgo tester: exit status 1
I0623 08:09:18.610323    2928 down.go:29] GCE deployer starting Down()
I0623 08:09:18.610400    2928 common.go:212] checking locally built kubectl ...
I0623 08:09:18.610561    2928 down.go:43] About to run script at: /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh
I0623 08:09:18.610580    2928 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh 
Bringing down cluster using provider: gce
... calling verify-prereqs
... skipping 40 lines ...
Property "users.k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9-basic-auth" unset.
Property "contexts.k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9" unset.
Cleared config for k8s-infra-e2e-boskos-115_kt2-d118eff5-f2b9 from /logs/artifacts/d118eff5-f2b9-11ec-8dfe-daa417708791/kubetest2-kubeconfig
Done
I0623 08:14:55.548288    2928 down.go:53] about to delete nodeport firewall rule
I0623 08:14:55.548351    2928 local.go:42] ⚙️ gcloud compute firewall-rules delete --project k8s-infra-e2e-boskos-115 kt2-d118eff5-f2b9-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-115/global/firewalls/kt2-d118eff5-f2b9-minion-nodeports' was not found

W0623 08:14:56.406242    2928 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
I0623 08:14:56.406272    2928 down.go:59] releasing boskos project
I0623 08:14:56.418864    2928 boskos.go:83] Boskos heartbeat func received signal to close
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
43d7455e3bc4
... skipping 4 lines ...