Result: FAILURE
Tests: 0 failed / 0 succeeded
Started: 2021-09-16 23:11
Elapsed: 2h25m
Revision: master

No Test Failures!


Error lines from build-log.txt

... skipping 347 lines ...
Trying to find master named 'kt2-280c76ac-1743-master'
Looking for address 'kt2-280c76ac-1743-master-ip'
Using master: kt2-280c76ac-1743-master (external IP: 35.222.74.146; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

................Kubernetes cluster created.
Cluster "k8s-infra-e2e-boskos-082_kt2-280c76ac-1743" set.
User "k8s-infra-e2e-boskos-082_kt2-280c76ac-1743" set.
Context "k8s-infra-e2e-boskos-082_kt2-280c76ac-1743" created.
Switched to context "k8s-infra-e2e-boskos-082_kt2-280c76ac-1743".
... skipping 25 lines ...
kt2-280c76ac-1743-minion-group-8xgx   Ready                      <none>   13s   v1.23.0-alpha.2.63+924f1968828da3
kt2-280c76ac-1743-minion-group-rr86   Ready                      <none>   13s   v1.23.0-alpha.2.63+924f1968828da3
kt2-280c76ac-1743-minion-group-xp78   Ready                      <none>   3s    v1.23.0-alpha.2.63+924f1968828da3
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
Cluster validation succeeded
Done, listing cluster services:
... skipping 40 lines ...
Specify --start=53075 in the next get-serial-port-output invocation to get only the new output starting from here.
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/cluster-autoscaler.log*: No such file or directory
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
Dumping logs from nodes locally to '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/cluster-logs'
Detecting nodes in the cluster
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Changing logfiles to be world-readable for download
Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from kt2-280c76ac-1743-minion-group-xp78
... skipping 8 lines ...
load pubkey "/root/.ssh/google_compute_engine": invalid format
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
load pubkey "/root/.ssh/google_compute_engine": invalid format
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
scp: /var/log/fluentd.log*: No such file or directory
scp: /var/log/node-problem-detector.log*: No such file or directory
scp: /var/log/kubelet.cov*: No such file or directory
scp: /var/log/startupscript.log*: No such file or directory
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
INSTANCE_GROUPS=kt2-280c76ac-1743-minion-group
NODE_NAMES=kt2-280c76ac-1743-minion-group-8xgx kt2-280c76ac-1743-minion-group-rr86 kt2-280c76ac-1743-minion-group-xp78
Failures for kt2-280c76ac-1743-minion-group (if any):
I0916 23:38:36.334984    2918 dumplogs.go:121] About to run: [/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl cluster-info dump]
I0916 23:38:36.335023    2918 local.go:42] ⚙️ /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl cluster-info dump
I0916 23:38:37.761508    2918 local.go:42] ⚙️ /home/prow/go/bin/kubetest2-tester-ginkgo ; --focus-regex=\[Conformance\] ; --use-built-binaries
I0916 23:38:37.850326   97228 ginkgo.go:120] Using kubeconfig at /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
I0916 23:38:37.850478   97228 ginkgo.go:90] Running ginkgo test as /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/ginkgo [--nodes=1 /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/e2e.test -- --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --kubectl-path=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --ginkgo.flakeAttempts=1 --ginkgo.skip= --ginkgo.focus=\[Conformance\] --report-dir=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1]
Sep 16 23:38:37.940: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0916 23:38:37.940950   97243 e2e.go:127] Starting e2e run "c1cfc900-8279-4ada-8446-7250f5b82b39" on Ginkgo node 1
{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1631835517 - Will randomize all specs
Will run 346 of 6852 specs

Sep 16 23:38:39.954: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 16 23:38:39.957: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Sep 16 23:38:39.976: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Sep 16 23:38:40.014: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:40.014: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:40.014: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Sep 16 23:38:40.014: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:40.014: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:40.014: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:40.014: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:40.014: INFO: 
Sep 16 23:38:42.041: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:42.041: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:42.041: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Sep 16 23:38:42.041: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:42.041: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:42.041: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:42.041: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:42.041: INFO: 
Sep 16 23:38:44.040: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:44.040: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:44.040: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
Sep 16 23:38:44.040: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:44.040: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:44.040: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:44.040: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:44.040: INFO: 
Sep 16 23:38:46.052: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:46.052: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:46.052: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
Sep 16 23:38:46.052: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:46.053: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:46.053: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:46.053: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:46.053: INFO: 
Sep 16 23:38:48.043: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:48.043: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:48.043: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (8 seconds elapsed)
Sep 16 23:38:48.043: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:48.043: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:48.043: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:48.043: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:48.043: INFO: 
Sep 16 23:38:50.044: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:50.044: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:50.044: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (10 seconds elapsed)
Sep 16 23:38:50.044: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:50.044: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:50.044: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:50.044: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:50.044: INFO: 
Sep 16 23:38:52.048: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:52.048: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:52.048: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (12 seconds elapsed)
Sep 16 23:38:52.048: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:52.048: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:52.048: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:52.048: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:52.048: INFO: 
Sep 16 23:38:54.099: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:54.099: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:54.099: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (14 seconds elapsed)
Sep 16 23:38:54.099: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:54.099: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:54.099: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:54.099: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:54.099: INFO: 
Sep 16 23:38:56.049: INFO: The status of Pod l7-default-backend-79858d8f86-j8wsx is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:56.049: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:56.049: INFO: 30 / 32 pods in namespace 'kube-system' are running and ready (16 seconds elapsed)
Sep 16 23:38:56.049: INFO: expected 8 pod replicas in namespace 'kube-system', 6 are Running and Ready.
Sep 16 23:38:56.049: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:56.049: INFO: l7-default-backend-79858d8f86-j8wsx     kt2-280c76ac-1743-minion-group-rr86  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:16 +0000 UTC  }]
Sep 16 23:38:56.049: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:56.049: INFO: 
Sep 16 23:38:58.042: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:38:58.042: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (18 seconds elapsed)
Sep 16 23:38:58.042: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:38:58.042: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:38:58.042: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:38:58.042: INFO: 
Sep 16 23:39:00.046: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:00.046: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (20 seconds elapsed)
Sep 16 23:39:00.046: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:00.046: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:00.046: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:00.046: INFO: 
Sep 16 23:39:02.041: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:02.041: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (22 seconds elapsed)
Sep 16 23:39:02.041: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:02.041: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:02.041: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:02.041: INFO: 
Sep 16 23:39:04.044: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:04.044: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (24 seconds elapsed)
Sep 16 23:39:04.044: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:04.044: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:04.044: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:04.044: INFO: 
Sep 16 23:39:06.049: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:06.049: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (26 seconds elapsed)
Sep 16 23:39:06.049: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:06.049: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:06.049: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:06.049: INFO: 
Sep 16 23:39:08.041: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:08.041: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (28 seconds elapsed)
Sep 16 23:39:08.041: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:08.041: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:08.041: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:08.041: INFO: 
Sep 16 23:39:10.044: INFO: The status of Pod metrics-server-v0.5.0-6554f5dbd8-c7f7n is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Sep 16 23:39:10.044: INFO: 31 / 32 pods in namespace 'kube-system' are running and ready (30 seconds elapsed)
Sep 16 23:39:10.044: INFO: expected 8 pod replicas in namespace 'kube-system', 7 are Running and Ready.
Sep 16 23:39:10.044: INFO: POD                                     NODE                                 PHASE    GRACE  CONDITIONS
Sep 16 23:39:10.044: INFO: metrics-server-v0.5.0-6554f5dbd8-c7f7n  kt2-280c76ac-1743-minion-group-xp78  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2021-09-16 23:37:37 +0000 UTC  }]
Sep 16 23:39:10.044: INFO: 
... skipping 48 lines ...
Sep 16 23:39:28.040: INFO: 32 / 32 pods in namespace 'kube-system' are running and ready (48 seconds elapsed)
... skipping 45 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1526
    should create a pod from an image when restart is Never  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":346,"completed":1,"skipped":2,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:246.808 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":2,"skipped":15,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:43:45.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1389" for this suite.
STEP: Destroying namespace "webhook-1389-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":3,"skipped":15,"failed":0}
SSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 16 23:43:48.354: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:43:48.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5321" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":4,"skipped":22,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should validate Deployment Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 62 lines ...
Sep 16 23:43:50.740: INFO: Pod "test-deployment-pw7wk-d9bb78c49-grh2h" is available:
&Pod{ObjectMeta:{test-deployment-pw7wk-d9bb78c49-grh2h test-deployment-pw7wk-d9bb78c49- deployment-3868  33c74017-288a-49a2-83ff-f4277535d4a3 1857 0 2021-09-16 23:43:48 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:d9bb78c49] map[] [{apps/v1 ReplicaSet test-deployment-pw7wk-d9bb78c49 f60b70ab-7bc6-4b38-9160-d8aba2d860df 0xc0022a3987 0xc0022a3988}] []  [{kube-controller-manager Update v1 2021-09-16 23:43:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f60b70ab-7bc6-4b38-9160-d8aba2d860df\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-16 23:43:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.7\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-krvj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-krvj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-xp78,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 23:43:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 23:43:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 23:43:49 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-16 23:43:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:10.64.3.7,StartTime:2021-09-16 23:43:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2021-09-16 23:43:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c12ef5395f2e9b806a9c3d4c9b3909b8c1b6d8f14ca8eda50e1a9738f637055a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:43:50.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3868" for this suite.
•{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":346,"completed":5,"skipped":31,"failed":0}
SSSSS
------------------------------
[sig-network] EndpointSlice 
  should support creating EndpointSlice API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 24 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:43:51.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9299" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":346,"completed":6,"skipped":36,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 44 lines ...
• [SLOW TEST:16.802 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":346,"completed":7,"skipped":46,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 345 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
    should proxy through a service and a pod  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":346,"completed":8,"skipped":65,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.111 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":346,"completed":9,"skipped":92,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 60 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1398
    should be able to retrieve and filter logs  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":346,"completed":10,"skipped":103,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 81 lines ...
• [SLOW TEST:312.295 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":11,"skipped":118,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 16 23:49:44.466: INFO: The status of Pod busybox-scheduling-af60ed59-712e-46de-b560-4e8494a510ce is Pending, waiting for it to be Running (with Ready = true)
Sep 16 23:49:46.472: INFO: The status of Pod busybox-scheduling-af60ed59-712e-46de-b560-4e8494a510ce is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:49:46.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-405" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":150,"failed":0}
SSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: waiting for the root ca configmap reconciled
Sep 16 23:49:47.583: INFO: Reconciled root ca configmap in namespace "svcaccounts-9811"
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:49:47.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9811" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":13,"skipped":160,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 16 23:49:47.698: INFO: stderr: ""
Sep 16 23:49:47.698: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.63+924f1968828da3\", GitCommit:\"924f1968828da3b0c20a9eea2e19236a47fa689f\", GitTreeState:\"clean\", BuildDate:\"2021-09-16T21:09:26Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23+\", GitVersion:\"v1.23.0-alpha.2.63+924f1968828da3\", GitCommit:\"924f1968828da3b0c20a9eea2e19236a47fa689f\", GitTreeState:\"clean\", BuildDate:\"2021-09-16T21:09:26Z\", GoVersion:\"go1.17.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:49:47.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6745" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":346,"completed":14,"skipped":176,"failed":0}
S
------------------------------
[sig-node] Pods 
  should run through the lifecycle of Pods and PodStatus [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 30 lines ...
Sep 16 23:49:51.750: INFO: observed event type MODIFIED
Sep 16 23:49:51.764: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:49:51.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-261" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":15,"skipped":177,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 19 lines ...
• [SLOW TEST:310.148 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":16,"skipped":188,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 13 lines ...
Sep 16 23:55:03.081: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:55:03.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9335" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":346,"completed":17,"skipped":191,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 91 lines ...
• [SLOW TEST:45.839 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":18,"skipped":206,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 23:55:48.995: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-349fbb06-c61b-40b9-bb1b-61e4e7aab0ed" in namespace "security-context-test-2319" to be "Succeeded or Failed"
Sep 16 23:55:49.001: INFO: Pod "busybox-readonly-false-349fbb06-c61b-40b9-bb1b-61e4e7aab0ed": Phase="Pending", Reason="", readiness=false. Elapsed: 5.476319ms
Sep 16 23:55:51.006: INFO: Pod "busybox-readonly-false-349fbb06-c61b-40b9-bb1b-61e4e7aab0ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0105538s
Sep 16 23:55:51.006: INFO: Pod "busybox-readonly-false-349fbb06-c61b-40b9-bb1b-61e4e7aab0ed" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:55:51.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2319" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":346,"completed":19,"skipped":241,"failed":0}

------------------------------
[sig-apps] ReplicaSet 
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 40 lines ...
• [SLOW TEST:5.201 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should validate Replicaset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":20,"skipped":241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 22 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:56:00.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4363" for this suite.
STEP: Destroying namespace "webhook-4363-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":21,"skipped":264,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 25 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:56:04.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1887" for this suite.
STEP: Destroying namespace "webhook-1887-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":346,"completed":22,"skipped":269,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 37 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1560
    should update a single-container pod's image  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":346,"completed":23,"skipped":276,"failed":0}
S
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":24,"skipped":277,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
• [SLOW TEST:6.471 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":25,"skipped":299,"failed":0}
SSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":346,"completed":26,"skipped":302,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-54eab012-fbb6-4f94-a7d2-011e37a59810
STEP: Creating a pod to test consume secrets
Sep 16 23:56:34.601: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb" in namespace "projected-2943" to be "Succeeded or Failed"
Sep 16 23:56:34.611: INFO: Pod "pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.528382ms
Sep 16 23:56:36.618: INFO: Pod "pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016408712s
Sep 16 23:56:38.622: INFO: Pod "pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02090176s
STEP: Saw pod success
Sep 16 23:56:38.622: INFO: Pod "pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb" satisfied condition "Succeeded or Failed"
Sep 16 23:56:38.625: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 23:56:38.679: INFO: Waiting for pod pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb to disappear
Sep 16 23:56:38.684: INFO: Pod pod-projected-secrets-dd45e72a-8ec9-46c3-977a-f308dbb6debb no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:56:38.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2943" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":27,"skipped":310,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Sep 16 23:56:54.942: INFO: File wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:56:54.957: INFO: File jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:56:54.957: INFO: Lookups using dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 failed for: [wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local]

Sep 16 23:56:59.965: INFO: File wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:56:59.970: INFO: File jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:56:59.970: INFO: Lookups using dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 failed for: [wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local]

I0916 23:57:01.570307    2918 boskos.go:86] Sending heartbeat to Boskos
Sep 16 23:57:04.968: INFO: File wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:57:04.974: INFO: File jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:57:04.974: INFO: Lookups using dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 failed for: [wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local]

Sep 16 23:57:09.968: INFO: File wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:57:09.976: INFO: File jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local from pod  dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 contains 'foo.example.com.
' instead of 'bar.example.com.'
Sep 16 23:57:09.976: INFO: Lookups using dns-9287/dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 failed for: [wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local jessie_udp@dns-test-service-3.dns-9287.svc.cluster.local]

Sep 16 23:57:14.971: INFO: DNS probes using dns-test-c74ae732-a3be-4ac9-ac77-d6b6f00b89a9 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9287.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9287.svc.cluster.local; sleep 1; done
... skipping 16 lines ...
• [SLOW TEST:38.428 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":28,"skipped":316,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 12 lines ...
STEP: reading a file in the container
Sep 16 23:57:20.040: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl exec --namespace=svcaccounts-1569 pod-service-account-dd40a1ef-af8d-47f8-b5c6-36c20ac9933b -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:57:20.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1569" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":346,"completed":29,"skipped":358,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 11 lines ...
Sep 16 23:57:22.402: INFO: The status of Pod pod-hostip-893795e8-482b-439f-954a-c0e3bf47845c is Running (Ready = true)
Sep 16 23:57:22.437: INFO: Pod pod-hostip-893795e8-482b-439f-954a-c0e3bf47845c has hostIP: 10.128.0.4
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:57:22.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8826" for this suite.
•{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":346,"completed":30,"skipped":372,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected combined
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-61043344-4da1-4792-9a62-35615d45fb77
STEP: Creating secret with name secret-projected-all-test-volume-9bce267f-1396-4768-81bd-c4f08da7ed1b
STEP: Creating a pod to test Check all projections for projected volume plugin
Sep 16 23:57:22.533: INFO: Waiting up to 5m0s for pod "projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863" in namespace "projected-3319" to be "Succeeded or Failed"
Sep 16 23:57:22.542: INFO: Pod "projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863": Phase="Pending", Reason="", readiness=false. Elapsed: 8.879706ms
Sep 16 23:57:24.546: INFO: Pod "projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013282729s
STEP: Saw pod success
Sep 16 23:57:24.546: INFO: Pod "projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863" satisfied condition "Succeeded or Failed"
Sep 16 23:57:24.549: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863 container projected-all-volume-test: <nil>
STEP: delete the pod
Sep 16 23:57:24.567: INFO: Waiting for pod projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863 to disappear
Sep 16 23:57:24.572: INFO: Pod projected-volume-a0b7520b-d1ee-4083-891d-2f8e071e4863 no longer exists
[AfterEach] [sig-storage] Projected combined
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:57:24.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3319" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":346,"completed":31,"skipped":391,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":346,"completed":32,"skipped":429,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 5 lines ...
[BeforeEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 16 23:57:47.438: INFO: The status of Pod server-envvars-2f08ea14-89a0-4619-a2de-9f964fe7f2cc is Pending, waiting for it to be Running (with Ready = true)
Sep 16 23:57:49.444: INFO: The status of Pod server-envvars-2f08ea14-89a0-4619-a2de-9f964fe7f2cc is Running (Ready = true)
Sep 16 23:57:49.488: INFO: Waiting up to 5m0s for pod "client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05" in namespace "pods-3819" to be "Succeeded or Failed"
Sep 16 23:57:49.515: INFO: Pod "client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05": Phase="Pending", Reason="", readiness=false. Elapsed: 27.464696ms
Sep 16 23:57:51.522: INFO: Pod "client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.034575554s
STEP: Saw pod success
Sep 16 23:57:51.522: INFO: Pod "client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05" satisfied condition "Succeeded or Failed"
Sep 16 23:57:51.531: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05 container env3cont: <nil>
STEP: delete the pod
Sep 16 23:57:51.557: INFO: Waiting for pod client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05 to disappear
Sep 16 23:57:51.564: INFO: Pod client-envvars-471bc2f7-78a5-4caa-8e20-583393a61f05 no longer exists
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:57:51.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3819" for this suite.
•{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":33,"skipped":477,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-289811af-4e00-4574-9927-5f6d058b1d89
STEP: Creating a pod to test consume secrets
Sep 16 23:57:51.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772" in namespace "projected-222" to be "Succeeded or Failed"
Sep 16 23:57:51.657: INFO: Pod "pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772": Phase="Pending", Reason="", readiness=false. Elapsed: 7.195023ms
Sep 16 23:57:53.661: INFO: Pod "pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011069778s
STEP: Saw pod success
Sep 16 23:57:53.661: INFO: Pod "pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772" satisfied condition "Succeeded or Failed"
Sep 16 23:57:53.664: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 23:57:53.684: INFO: Waiting for pod pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772 to disappear
Sep 16 23:57:53.689: INFO: Pod pod-projected-secrets-e1a36cfd-20f4-4ab6-999e-64c55d06b772 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:57:53.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-222" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":34,"skipped":486,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 22 lines ...
• [SLOW TEST:6.201 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":346,"completed":35,"skipped":495,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 19 lines ...
• [SLOW TEST:35.534 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":346,"completed":36,"skipped":548,"failed":0}
SSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 16 23:58:35.435: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 16 23:58:35.509: INFO: Waiting up to 5m0s for pod "downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1" in namespace "downward-api-512" to be "Succeeded or Failed"
Sep 16 23:58:35.518: INFO: Pod "downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.832121ms
Sep 16 23:58:37.522: INFO: Pod "downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012530333s
STEP: Saw pod success
Sep 16 23:58:37.522: INFO: Pod "downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1" satisfied condition "Succeeded or Failed"
Sep 16 23:58:37.525: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1 container dapi-container: <nil>
STEP: delete the pod
Sep 16 23:58:37.545: INFO: Waiting for pod downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1 to disappear
Sep 16 23:58:37.549: INFO: Pod downward-api-b4843aa0-4016-42fd-83fe-4ec3d600acd1 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:58:37.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-512" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":37,"skipped":555,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 23 lines ...
• [SLOW TEST:13.207 seconds]
[sig-api-machinery] Namespaces [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":38,"skipped":569,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 82 lines ...
• [SLOW TEST:17.511 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":346,"completed":39,"skipped":593,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-5211/secret-test-5b09f63d-b7dd-4852-b567-ceb27d204ad0
STEP: Creating a pod to test consume secrets
Sep 16 23:59:08.341: INFO: Waiting up to 5m0s for pod "pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a" in namespace "secrets-5211" to be "Succeeded or Failed"
Sep 16 23:59:08.355: INFO: Pod "pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.909836ms
Sep 16 23:59:10.362: INFO: Pod "pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021039043s
STEP: Saw pod success
Sep 16 23:59:10.362: INFO: Pod "pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a" satisfied condition "Succeeded or Failed"
Sep 16 23:59:10.365: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a container env-test: <nil>
STEP: delete the pod
Sep 16 23:59:10.387: INFO: Waiting for pod pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a to disappear
Sep 16 23:59:10.391: INFO: Pod pod-configmaps-f4d785d9-76fb-48c8-9a30-a77d09c10f2a no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:10.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5211" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":40,"skipped":611,"failed":0}
SSSSSS
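The Secrets test above consumes a secret key as an environment variable and checks the container output. A minimal sketch of such a pod, with hypothetical secret and key names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test   # hypothetical secret name
          key: data-1         # hypothetical key within the secret
```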
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:10.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4964" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":346,"completed":41,"skipped":617,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-4b621033-0b00-48a2-ab78-ede970055530
STEP: Creating a pod to test consume configMaps
Sep 16 23:59:10.557: INFO: Waiting up to 5m0s for pod "pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7" in namespace "configmap-3484" to be "Succeeded or Failed"
Sep 16 23:59:10.563: INFO: Pod "pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.780589ms
Sep 16 23:59:12.566: INFO: Pod "pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009421769s
STEP: Saw pod success
Sep 16 23:59:12.567: INFO: Pod "pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7" satisfied condition "Succeeded or Failed"
Sep 16 23:59:12.569: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7 container agnhost-container: <nil>
STEP: delete the pod
Sep 16 23:59:12.589: INFO: Waiting for pod pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7 to disappear
Sep 16 23:59:12.593: INFO: Pod pod-configmaps-ccd6cea8-e2d6-49a0-9948-a69b263109e7 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:12.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3484" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":42,"skipped":677,"failed":0}
SSSSSSSSSSSSSSSS
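The ConfigMap test above mounts a configMap volume into a pod running as a non-root user. An illustrative partial manifest (names and UID are assumptions; the suite's actual pod spec is not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-pod     # hypothetical
spec:
  securityContext:
    runAsUser: 1000              # non-root, as the test name implies
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    volumeMounts:
    - name: config
      mountPath: /etc/configmap-volume
  volumes:
  - name: config
    configMap:
      name: configmap-test-volume  # hypothetical configMap name
```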
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-3e026011-41af-437f-9d21-78a804d11ee0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:16.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1913" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":43,"skipped":693,"failed":0}
SSSSSSSSSS
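The projected-secret test above creates the pod first and the secrets afterwards, then waits to observe the update in the volume. A partial pod spec sketching such a volume (secret names are simplified placeholders, not the suffixed names from the log); `optional: true` is what lets the pod start before the secrets exist:

```yaml
volumes:
- name: projected-secrets
  projected:
    sources:
    - secret:
        name: s-test-opt-create   # placeholder; may be created after the pod
        optional: true            # pod starts even while the secret is absent
    - secret:
        name: s-test-opt-del      # placeholder; may be deleted while mounted
        optional: true
```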
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 34 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:18.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5930" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":44,"skipped":703,"failed":0}
SSS
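The garbage-collector test above deletes a Deployment with an Orphan propagation policy, which leaves the dependent ReplicaSet in place instead of cascading the delete. A sketch of the delete-options body the API call carries (the test issues it through the client library):

```yaml
kind: DeleteOptions
apiVersion: v1
propagationPolicy: Orphan   # clear ownerReferences on dependents rather than delete them
```

The kubectl equivalent is `kubectl delete deployment <name> --cascade=orphan`.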
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 23:59:18.307: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 16 23:59:18.368: INFO: Waiting up to 5m0s for pod "pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1" in namespace "emptydir-8186" to be "Succeeded or Failed"
Sep 16 23:59:18.372: INFO: Pod "pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.912119ms
Sep 16 23:59:20.378: INFO: Pod "pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009908154s
STEP: Saw pod success
Sep 16 23:59:20.378: INFO: Pod "pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1" satisfied condition "Succeeded or Failed"
Sep 16 23:59:20.381: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1 container test-container: <nil>
STEP: delete the pod
Sep 16 23:59:20.429: INFO: Waiting for pod pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1 to disappear
Sep 16 23:59:20.433: INFO: Pod pod-cdd1eda8-6907-4865-a2d0-dbc5330720f1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:20.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8186" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":45,"skipped":706,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
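The emptyDir test above writes a 0644 file onto a tmpfs-backed volume and verifies its mode and content. A partial pod spec showing the relevant volume (container and volume names are illustrative):

```yaml
containers:
- name: test-container
  image: k8s.gcr.io/e2e-test-images/agnhost:2.33   # illustrative image
  volumeMounts:
  - name: test-volume
    mountPath: /test-volume
volumes:
- name: test-volume
  emptyDir:
    medium: Memory   # back the volume with tmpfs instead of node-local disk
```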
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 22 lines ...
Sep 16 23:59:22.881: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Sep 16 23:59:22.881: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4181 describe pod agnhost-primary-twlt5'
Sep 16 23:59:22.957: INFO: stderr: ""
Sep 16 23:59:22.957: INFO: stdout: "Name:         agnhost-primary-twlt5\nNamespace:    kubectl-4181\nPriority:     0\nNode:         kt2-280c76ac-1743-minion-group-8xgx/10.128.0.4\nStart Time:   Thu, 16 Sep 2021 23:59:20 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.64.0.25\nIPs:\n  IP:           10.64.0.25\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://049b6de1dfe23eddb7af284983c3e170276468dd64c1b9f1ce02555c5eb898ca\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Thu, 16 Sep 2021 23:59:21 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gqh2k (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-gqh2k:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-4181/agnhost-primary-twlt5 to 
kt2-280c76ac-1743-minion-group-8xgx\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
Sep 16 23:59:22.957: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4181 describe rc agnhost-primary'
Sep 16 23:59:23.039: INFO: stderr: ""
Sep 16 23:59:23.039: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-4181\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.33\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-twlt5\n"
Sep 16 23:59:23.039: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4181 describe service agnhost-primary'
Sep 16 23:59:23.110: INFO: stderr: ""
Sep 16 23:59:23.110: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-4181\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.0.39.49\nIPs:               10.0.39.49\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.64.0.25:6379\nSession Affinity:  None\nEvents:            <none>\n"
Sep 16 23:59:23.116: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4181 describe node kt2-280c76ac-1743-master'
Sep 16 23:59:23.221: INFO: stderr: ""
Sep 16 23:59:23.221: INFO: stdout: "Name:               kt2-280c76ac-1743-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-central1\n                    failure-domain.beta.kubernetes.io/zone=us-central1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kt2-280c76ac-1743-master\n                    kubernetes.io/os=linux\n                    node.kubernetes.io/instance-type=n1-standard-1\n                    topology.kubernetes.io/region=us-central1\n                    topology.kubernetes.io/zone=us-central1-b\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Thu, 16 Sep 2021 23:36:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nLease:\n  HolderIdentity:  kt2-280c76ac-1743-master\n  AcquireTime:     <unset>\n  RenewTime:       Thu, 16 Sep 2021 23:59:18 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Thu, 16 Sep 2021 23:37:15 +0000   Thu, 16 Sep 2021 23:37:15 +0000   RouteCreated                 RouteController created a route\n  MemoryPressure       False   Thu, 16 Sep 2021 23:57:41 +0000   Thu, 16 Sep 2021 23:36:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Thu, 16 Sep 2021 23:57:41 +0000   
Thu, 16 Sep 2021 23:36:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Thu, 16 Sep 2021 23:57:41 +0000   Thu, 16 Sep 2021 23:36:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Thu, 16 Sep 2021 23:57:41 +0000   Thu, 16 Sep 2021 23:37:10 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.128.0.2\n  ExternalIP:   35.222.74.146\n  InternalDNS:  kt2-280c76ac-1743-master.c.k8s-infra-e2e-boskos-082.internal\n  Hostname:     kt2-280c76ac-1743-master.c.k8s-infra-e2e-boskos-082.internal\nCapacity:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          16293736Ki\n  hugepages-2Mi:              0\n  memory:                     3773752Ki\n  pods:                       110\nAllocatable:\n  attachable-volumes-gce-pd:  127\n  cpu:                        1\n  ephemeral-storage:          15016307073\n  hugepages-2Mi:              0\n  memory:                     3517752Ki\n  pods:                       110\nSystem Info:\n  Machine ID:                 e3875a18f35f146f4babecc10c19bc60\n  System UUID:                e3875a18-f35f-146f-4bab-ecc10c19bc60\n  Boot ID:                    ac8ca095-0e12-4731-9946-5b092e3f754a\n  Kernel Version:             5.4.129+\n  OS Image:                   Container-Optimized OS from Google\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.4.6\n  Kubelet Version:            v1.23.0-alpha.2.63+924f1968828da3\n  Kube-Proxy Version:         v1.23.0-alpha.2.63+924f1968828da3\nPodCIDR:                      10.64.2.0/24\nPodCIDRs:                     10.64.2.0/24\nProviderID:                   gce://k8s-infra-e2e-boskos-082/us-central1-b/kt2-280c76ac-1743-master\nNon-terminated Pods:          (10 in total)\n  Namespace                   Name                             
                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                                ------------  ----------  ---------------  -------------  ---\n  kube-system                 etcd-server-events-kt2-280c76ac-1743-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 etcd-server-kt2-280c76ac-1743-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 fluentd-gcp-v3.2.0-hfsgd                            100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    17m\n  kube-system                 konnectivity-server-kt2-280c76ac-1743-master        25m (2%)      0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 kube-addon-manager-kt2-280c76ac-1743-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         21m\n  kube-system                 kube-apiserver-kt2-280c76ac-1743-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 kube-controller-manager-kt2-280c76ac-1743-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         21m\n  kube-system                 kube-scheduler-kt2-280c76ac-1743-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         22m\n  kube-system                 l7-lb-controller-kt2-280c76ac-1743-master           10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         22m\n  kube-system                 metadata-proxy-v0.1-qlkxp                           32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      22m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        997m (99%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage       
   0 (0%)       0 (0%)\n  hugepages-2Mi              0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:                      <none>\n"
Sep 16 23:59:23.221: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-4181 describe namespace kubectl-4181'
Sep 16 23:59:23.294: INFO: stderr: ""
Sep 16 23:59:23.294: INFO: stdout: "Name:         kubectl-4181\nLabels:       e2e-framework=kubectl\n              e2e-run=c1cfc900-8279-4ada-8446-7250f5b82b39\n              kubernetes.io/metadata.name=kubectl-4181\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:23.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4181" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":346,"completed":46,"skipped":736,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 23:59:23.451: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d" in namespace "downward-api-2785" to be "Succeeded or Failed"
Sep 16 23:59:23.460: INFO: Pod "downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.553898ms
Sep 16 23:59:25.500: INFO: Pod "downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.049712698s
STEP: Saw pod success
Sep 16 23:59:25.500: INFO: Pod "downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d" satisfied condition "Succeeded or Failed"
Sep 16 23:59:25.506: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d container client-container: <nil>
STEP: delete the pod
Sep 16 23:59:25.528: INFO: Waiting for pod downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d to disappear
Sep 16 23:59:25.533: INFO: Pod downwardapi-volume-f94cee9e-1661-4e30-8e6b-d2bd41aaee5d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:25.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2785" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":47,"skipped":749,"failed":0}
SSSSSSSSSSSSSSSSSSS
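The downward API test above sets an explicit mode on one projected item file. A sketch of the volume definition (path and mode are illustrative):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: podname
      mode: 0400                 # per-item file mode, overriding defaultMode
      fieldRef:
        fieldPath: metadata.name # expose the pod's own name as file content
```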
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 23:59:25.622: INFO: Waiting up to 5m0s for pod "downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e" in namespace "projected-8061" to be "Succeeded or Failed"
Sep 16 23:59:25.630: INFO: Pod "downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052791ms
Sep 16 23:59:27.634: INFO: Pod "downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012305807s
STEP: Saw pod success
Sep 16 23:59:27.634: INFO: Pod "downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e" satisfied condition "Succeeded or Failed"
Sep 16 23:59:27.637: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e container client-container: <nil>
STEP: delete the pod
Sep 16 23:59:27.659: INFO: Waiting for pod downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e to disappear
Sep 16 23:59:27.663: INFO: Pod downwardapi-volume-52bdcd59-393e-4ce6-ae2a-42af6f1b068e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:27.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8061" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":48,"skipped":768,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 16 23:59:27.671: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Sep 16 23:59:27.723: INFO: Waiting up to 5m0s for pod "var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6" in namespace "var-expansion-7084" to be "Succeeded or Failed"
Sep 16 23:59:27.737: INFO: Pod "var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.082173ms
Sep 16 23:59:29.743: INFO: Pod "var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019428455s
STEP: Saw pod success
Sep 16 23:59:29.743: INFO: Pod "var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6" satisfied condition "Succeeded or Failed"
Sep 16 23:59:29.746: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6 container dapi-container: <nil>
STEP: delete the pod
Sep 16 23:59:29.766: INFO: Waiting for pod var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6 to disappear
Sep 16 23:59:29.770: INFO: Pod var-expansion-94e74e69-549e-4581-a971-63ddee6a73e6 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:29.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7084" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":49,"skipped":777,"failed":0}
SSSSSSSSSSSSSSSSSS
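The variable-expansion test above substitutes an environment variable into a volume subpath via `subPathExpr`. An illustrative partial spec (container and volume names are assumptions):

```yaml
containers:
- name: dapi-container
  image: k8s.gcr.io/e2e-test-images/agnhost:2.33
  env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  volumeMounts:
  - name: workdir
    mountPath: /logs
    subPathExpr: $(POD_NAME)   # mounts the subdirectory named after the pod
volumes:
- name: workdir
  emptyDir: {}
```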
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 13 lines ...
Sep 16 23:59:32.882: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:33.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6395" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":50,"skipped":795,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
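The ReplicaSet test above relies on label-selector matching for adoption and release: a bare pod whose labels match the selector is adopted by the controller, and relabeling it releases it. An illustrative manifest (image is an assumption):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # an existing bare pod with this label is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/e2e-test-images/agnhost:2.33
```

Removing the matching label from an adopted pod clears its controller ownerReference, so the ReplicaSet releases it and creates a replacement.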
------------------------------
[sig-cli] Kubectl client Kubectl server-side dry-run 
  should check if kubectl can dry-run update Pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 19 lines ...
Sep 16 23:59:36.491: INFO: stderr: ""
Sep 16 23:59:36.491: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:36.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6465" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":51,"skipped":837,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 16 23:59:36.543: INFO: Asynchronously running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-3733 proxy --unix-socket=/tmp/kubectl-proxy-unix4073214934/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:36.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3733" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":346,"completed":52,"skipped":870,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 16 23:59:36.654: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d" in namespace "downward-api-1261" to be "Succeeded or Failed"
Sep 16 23:59:36.673: INFO: Pod "downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.118333ms
Sep 16 23:59:38.684: INFO: Pod "downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.030304842s
STEP: Saw pod success
Sep 16 23:59:38.684: INFO: Pod "downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d" satisfied condition "Succeeded or Failed"
Sep 16 23:59:38.687: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d container client-container: <nil>
STEP: delete the pod
Sep 16 23:59:38.708: INFO: Waiting for pod downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d to disappear
Sep 16 23:59:38.711: INFO: Pod downwardapi-volume-9dd06fbe-4c8f-4bd7-9d18-33e468c8b01d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:38.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1261" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":53,"skipped":872,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 11 lines ...
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:38.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6826" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":54,"skipped":915,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 5 lines ...
[It] should run the lifecycle of PodTemplates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:38.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-2110" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":346,"completed":55,"skipped":964,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.233 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":346,"completed":56,"skipped":972,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-67bb369a-a9ee-4d2f-89df-30d9feb5a611
STEP: Creating a pod to test consume secrets
Sep 16 23:59:55.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33" in namespace "projected-2147" to be "Succeeded or Failed"
Sep 16 23:59:55.207: INFO: Pod "pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33": Phase="Pending", Reason="", readiness=false. Elapsed: 6.818183ms
Sep 16 23:59:57.212: INFO: Pod "pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011884464s
STEP: Saw pod success
Sep 16 23:59:57.212: INFO: Pod "pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33" satisfied condition "Succeeded or Failed"
Sep 16 23:59:57.216: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 16 23:59:57.237: INFO: Waiting for pod pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33 to disappear
Sep 16 23:59:57.243: INFO: Pod pod-projected-secrets-2ca9c4ec-4757-4550-8396-bd9a03f41d33 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:57.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2147" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":57,"skipped":991,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 16 23:59:57.255: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 16 23:59:57.313: INFO: Waiting up to 5m0s for pod "pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf" in namespace "emptydir-504" to be "Succeeded or Failed"
Sep 16 23:59:57.332: INFO: Pod "pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.616287ms
Sep 16 23:59:59.340: INFO: Pod "pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.026815999s
STEP: Saw pod success
Sep 16 23:59:59.340: INFO: Pod "pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf" satisfied condition "Succeeded or Failed"
Sep 16 23:59:59.345: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf container test-container: <nil>
STEP: delete the pod
Sep 16 23:59:59.374: INFO: Waiting for pod pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf to disappear
Sep 16 23:59:59.378: INFO: Pod pod-2b128b94-9bdd-4afd-84bc-30a6b92f5abf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 16 23:59:59.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-504" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":58,"skipped":1019,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 113 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":59,"skipped":1024,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:01:01.885: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 17 00:01:01.940: INFO: Waiting up to 5m0s for pod "pod-cb711dda-3562-4ac3-ad7e-109cef17bd77" in namespace "emptydir-6890" to be "Succeeded or Failed"
Sep 17 00:01:01.946: INFO: Pod "pod-cb711dda-3562-4ac3-ad7e-109cef17bd77": Phase="Pending", Reason="", readiness=false. Elapsed: 6.503583ms
Sep 17 00:01:03.951: INFO: Pod "pod-cb711dda-3562-4ac3-ad7e-109cef17bd77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010947676s
STEP: Saw pod success
Sep 17 00:01:03.951: INFO: Pod "pod-cb711dda-3562-4ac3-ad7e-109cef17bd77" satisfied condition "Succeeded or Failed"
Sep 17 00:01:03.954: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-cb711dda-3562-4ac3-ad7e-109cef17bd77 container test-container: <nil>
STEP: delete the pod
Sep 17 00:01:03.982: INFO: Waiting for pod pod-cb711dda-3562-4ac3-ad7e-109cef17bd77 to disappear
Sep 17 00:01:03.986: INFO: Pod pod-cb711dda-3562-4ac3-ad7e-109cef17bd77 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:01:03.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6890" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":60,"skipped":1032,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 59 lines ...
• [SLOW TEST:10.856 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":61,"skipped":1066,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] PodTemplates 
  should delete a collection of pod templates [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
... skipping 14 lines ...
STEP: check that the list of pod templates matches the requested quantity
Sep 17 00:01:14.920: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:01:14.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-9752" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":62,"skipped":1083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
• [SLOW TEST:14.314 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":63,"skipped":1129,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 160 lines ...
Sep 17 00:01:30.269: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-8260 create -f -'
Sep 17 00:01:30.454: INFO: stderr: ""
Sep 17 00:01:30.454: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Sep 17 00:01:30.454: INFO: Waiting for all frontend pods to be Running.
Sep 17 00:01:35.507: INFO: Waiting for frontend to serve content.
Sep 17 00:01:40.529: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: 
Sep 17 00:01:45.555: INFO: Trying to add a new entry to the guestbook.
Sep 17 00:01:45.588: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Sep 17 00:01:45.606: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-8260 delete --grace-period=0 --force -f -'
Sep 17 00:01:45.745: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Sep 17 00:01:45.745: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:339
    should create and stop a working application  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":346,"completed":64,"skipped":1142,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 19 lines ...
• [SLOW TEST:10.189 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":65,"skipped":1153,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 10 lines ...
Sep 17 00:01:56.632: INFO: Asynchronously running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=kubectl-6685 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:01:56.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6685" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":346,"completed":66,"skipped":1163,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 16 lines ...
• [SLOW TEST:16.735 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":67,"skipped":1178,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] server version 
  should find the server version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] server version
... skipping 11 lines ...
Sep 17 00:02:13.480: INFO: cleanMinorVersion: 23
Sep 17 00:02:13.480: INFO: Minor version: 23+
[AfterEach] [sig-api-machinery] server version
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:02:13.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-3156" for this suite.
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":68,"skipped":1202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:02:13.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44" in namespace "projected-8338" to be "Succeeded or Failed"
Sep 17 00:02:13.546: INFO: Pod "downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.742064ms
Sep 17 00:02:15.551: INFO: Pod "downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011924118s
STEP: Saw pod success
Sep 17 00:02:15.551: INFO: Pod "downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44" satisfied condition "Succeeded or Failed"
Sep 17 00:02:15.554: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44 container client-container: <nil>
STEP: delete the pod
Sep 17 00:02:15.589: INFO: Waiting for pod downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44 to disappear
Sep 17 00:02:15.594: INFO: Pod downwardapi-volume-13089fea-9138-4bcf-8c77-aa767c2ede44 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:02:15.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8338" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":69,"skipped":1226,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:02:15.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0" in namespace "downward-api-8617" to be "Succeeded or Failed"
Sep 17 00:02:15.686: INFO: Pod "downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.564723ms
Sep 17 00:02:17.691: INFO: Pod "downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023139334s
STEP: Saw pod success
Sep 17 00:02:17.691: INFO: Pod "downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0" satisfied condition "Succeeded or Failed"
Sep 17 00:02:17.694: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0 container client-container: <nil>
STEP: delete the pod
Sep 17 00:02:17.712: INFO: Waiting for pod downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0 to disappear
Sep 17 00:02:17.717: INFO: Pod downwardapi-volume-35ee7f76-80de-42e8-b4bd-9c6d8c5ac4a0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:02:17.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8617" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 00:02:17.726: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override command
Sep 17 00:02:17.795: INFO: Waiting up to 5m0s for pod "client-containers-c926a832-91b1-4a71-93ae-99f9895e4961" in namespace "containers-3088" to be "Succeeded or Failed"
Sep 17 00:02:17.799: INFO: Pod "client-containers-c926a832-91b1-4a71-93ae-99f9895e4961": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25172ms
Sep 17 00:02:19.804: INFO: Pod "client-containers-c926a832-91b1-4a71-93ae-99f9895e4961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00894804s
STEP: Saw pod success
Sep 17 00:02:19.804: INFO: Pod "client-containers-c926a832-91b1-4a71-93ae-99f9895e4961" satisfied condition "Succeeded or Failed"
Sep 17 00:02:19.806: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod client-containers-c926a832-91b1-4a71-93ae-99f9895e4961 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:02:19.825: INFO: Waiting for pod client-containers-c926a832-91b1-4a71-93ae-99f9895e4961 to disappear
Sep 17 00:02:19.830: INFO: Pod client-containers-c926a832-91b1-4a71-93ae-99f9895e4961 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:02:19.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3088" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":346,"completed":71,"skipped":1256,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 00:02:19.839: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 00:02:19.886: INFO: Waiting up to 5m0s for pod "downward-api-24a1e58f-5e09-4820-a4ad-a87287826659" in namespace "downward-api-3646" to be "Succeeded or Failed"
Sep 17 00:02:19.894: INFO: Pod "downward-api-24a1e58f-5e09-4820-a4ad-a87287826659": Phase="Pending", Reason="", readiness=false. Elapsed: 7.424121ms
Sep 17 00:02:21.898: INFO: Pod "downward-api-24a1e58f-5e09-4820-a4ad-a87287826659": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011889472s
STEP: Saw pod success
Sep 17 00:02:21.898: INFO: Pod "downward-api-24a1e58f-5e09-4820-a4ad-a87287826659" satisfied condition "Succeeded or Failed"
Sep 17 00:02:21.901: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downward-api-24a1e58f-5e09-4820-a4ad-a87287826659 container dapi-container: <nil>
STEP: delete the pod
Sep 17 00:02:21.920: INFO: Waiting for pod downward-api-24a1e58f-5e09-4820-a4ad-a87287826659 to disappear
Sep 17 00:02:21.924: INFO: Pod downward-api-24a1e58f-5e09-4820-a4ad-a87287826659 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:02:21.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3646" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1293,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 27 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":346,"completed":73,"skipped":1305,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-1545/configmap-test-8b2673ee-2801-4a6e-b767-ea24e5f448b7
STEP: Creating a pod to test consume configMaps
Sep 17 00:03:26.816: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3" in namespace "configmap-1545" to be "Succeeded or Failed"
Sep 17 00:03:26.937: INFO: Pod "pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 121.354084ms
Sep 17 00:03:28.941: INFO: Pod "pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125221725s
Sep 17 00:03:30.945: INFO: Pod "pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129677587s
STEP: Saw pod success
Sep 17 00:03:30.945: INFO: Pod "pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3" satisfied condition "Succeeded or Failed"
Sep 17 00:03:30.950: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3 container env-test: <nil>
STEP: delete the pod
Sep 17 00:03:30.978: INFO: Waiting for pod pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3 to disappear
Sep 17 00:03:30.982: INFO: Pod pod-configmaps-2d8dec8d-da0f-4b7b-9239-81e528ddd9c3 no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 3 lines ...
• [SLOW TEST:5.581 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be consumable via the environment [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":346,"completed":74,"skipped":1317,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 25 lines ...
Sep 17 00:03:33.352: INFO: Pod "test-recreate-deployment-785fd889-cm6mv" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-785fd889-cm6mv test-recreate-deployment-785fd889- deployment-2233  a6ce814a-8879-43b3-8060-84a0d0353647 6466 0 2021-09-17 00:03:33 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:785fd889] map[] [{apps/v1 ReplicaSet test-recreate-deployment-785fd889 784228d3-9fc0-4409-abe2-aa6080bffca3 0xc00598d08f 0xc00598d0a0}] []  [{kube-controller-manager Update v1 2021-09-17 00:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"784228d3-9fc0-4409-abe2-aa6080bffca3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 00:03:33 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wnbr2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wnbr2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-xp78,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 00:03:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 00:03:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-17 00:03:33 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 00:03:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:,StartTime:2021-09-17 00:03:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:03:33.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2233" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":75,"skipped":1386,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should test the lifecycle of a ReplicationController [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 26 lines ...
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:03:36.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1883" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":346,"completed":76,"skipped":1400,"failed":0}
SSSSSSS
------------------------------
[sig-node] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:03:36.462: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-9a1299b9-86f4-4363-970a-6783e185d156
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:03:36.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1142" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":77,"skipped":1407,"failed":0}
S
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-dd454191-9a99-434e-a6f6-e133a79da139
STEP: Creating a pod to test consume configMaps
Sep 17 00:03:36.964: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09" in namespace "projected-9971" to be "Succeeded or Failed"
Sep 17 00:03:36.973: INFO: Pod "pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.515555ms
Sep 17 00:03:38.977: INFO: Pod "pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012999568s
STEP: Saw pod success
Sep 17 00:03:38.977: INFO: Pod "pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09" satisfied condition "Succeeded or Failed"
Sep 17 00:03:38.981: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:03:39.026: INFO: Waiting for pod pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09 to disappear
Sep 17 00:03:39.030: INFO: Pod pod-projected-configmaps-e422e5c7-bd12-4828-bc5d-0a1d78f96d09 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:03:39.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9971" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should list, patch and delete a collection of StatefulSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 31 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should list, patch and delete a collection of StatefulSets [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":79,"skipped":1432,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 24 lines ...
Sep 17 00:04:00.099: INFO: created pod pod-service-account-nomountsa-nomountspec
Sep 17 00:04:00.099: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:04:00.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7538" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":346,"completed":80,"skipped":1459,"failed":0}
SS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-9tmb
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 00:04:00.355: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9tmb" in namespace "subpath-6214" to be "Succeeded or Failed"
Sep 17 00:04:00.365: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.585805ms
Sep 17 00:04:02.369: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013860599s
Sep 17 00:04:04.405: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 4.049025857s
Sep 17 00:04:06.412: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 6.056729895s
Sep 17 00:04:08.417: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 8.061133533s
Sep 17 00:04:10.423: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 10.067160644s
Sep 17 00:04:12.428: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 12.072137574s
Sep 17 00:04:14.434: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 14.07848848s
Sep 17 00:04:16.442: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 16.086426429s
Sep 17 00:04:18.447: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 18.091726474s
Sep 17 00:04:20.452: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Running", Reason="", readiness=true. Elapsed: 20.096893062s
Sep 17 00:04:22.456: INFO: Pod "pod-subpath-test-configmap-9tmb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.10073647s
STEP: Saw pod success
Sep 17 00:04:22.456: INFO: Pod "pod-subpath-test-configmap-9tmb" satisfied condition "Succeeded or Failed"
Sep 17 00:04:22.459: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-subpath-test-configmap-9tmb container test-container-subpath-configmap-9tmb: <nil>
STEP: delete the pod
Sep 17 00:04:22.478: INFO: Waiting for pod pod-subpath-test-configmap-9tmb to disappear
Sep 17 00:04:22.481: INFO: Pod pod-subpath-test-configmap-9tmb no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9tmb
Sep 17 00:04:22.481: INFO: Deleting pod "pod-subpath-test-configmap-9tmb" in namespace "subpath-6214"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":346,"completed":81,"skipped":1461,"failed":0}
SSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:04:24.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5333" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":346,"completed":82,"skipped":1468,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-c70de8d4-3f4a-4e4c-8341-3d5ca1079885
STEP: Creating a pod to test consume configMaps
Sep 17 00:04:24.757: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a" in namespace "projected-6587" to be "Succeeded or Failed"
Sep 17 00:04:24.764: INFO: Pod "pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.905763ms
Sep 17 00:04:26.768: INFO: Pod "pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010645744s
STEP: Saw pod success
Sep 17 00:04:26.768: INFO: Pod "pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a" satisfied condition "Succeeded or Failed"
Sep 17 00:04:26.771: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a container projected-configmap-volume-test: <nil>
STEP: delete the pod
Sep 17 00:04:26.798: INFO: Waiting for pod pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a to disappear
Sep 17 00:04:26.801: INFO: Pod pod-projected-configmaps-19e77252-4896-49de-8232-721c07c3c43a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:04:26.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6587" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":83,"skipped":1481,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 36 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PriorityClass endpoints
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
    verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":84,"skipped":1487,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 17 00:05:27.113: INFO: The status of Pod pod-exec-websocket-b19081c5-de5b-43b9-8c76-5c5e5c037188 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 00:05:29.117: INFO: The status of Pod pod-exec-websocket-b19081c5-de5b-43b9-8c76-5c5e5c037188 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:05:29.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8232" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1501,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context 
  should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 2 lines ...
Sep 17 00:05:29.204: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 00:05:29.275: INFO: Waiting up to 5m0s for pod "security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b" in namespace "security-context-262" to be "Succeeded or Failed"
Sep 17 00:05:29.280: INFO: Pod "security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.432017ms
Sep 17 00:05:31.286: INFO: Pod "security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010915471s
STEP: Saw pod success
Sep 17 00:05:31.286: INFO: Pod "security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b" satisfied condition "Succeeded or Failed"
Sep 17 00:05:31.290: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b container test-container: <nil>
STEP: delete the pod
Sep 17 00:05:31.337: INFO: Waiting for pod security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b to disappear
Sep 17 00:05:31.342: INFO: Pod security-context-4a568114-9cb1-4fa4-8f31-0b6148b4129b no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:05:31.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-262" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":86,"skipped":1629,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 24 lines ...
Sep 17 00:05:33.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-2538" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:05:33.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1372" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":87,"skipped":1691,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 12 lines ...
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:05:36.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5181" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":346,"completed":88,"skipped":1734,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 9 lines ...
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:05:36.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9710" for this suite.
STEP: Destroying namespace "nspatchtest-9d43ecea-12b0-482a-a66d-37428a57c139-9353" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":346,"completed":89,"skipped":1738,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 78 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to schedule Pod with nonempty NodeSelector.
I0917 00:07:01.629791    2918 boskos.go:86] Sending heartbeat to Boskos
I0917 00:12:01.659415    2918 boskos.go:86] Sending heartbeat to Boskos
Sep 17 00:15:38.086: INFO: Timed out waiting for the following pods to schedule
Sep 17 00:15:38.086: INFO: kube-system/konnectivity-agent-ngzhd
Sep 17 00:15:38.087: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.6()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436 +0x85
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 17 00:15:38.087: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":346,"completed":89,"skipped":1740,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] HostPort 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] HostPort
... skipping 34 lines ...
• [SLOW TEST:13.583 seconds]
[sig-network] HostPort
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":346,"completed":90,"skipped":1777,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 28 lines ...
• [SLOW TEST:70.401 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates basic preemption works [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":91,"skipped":1796,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:8.765 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":92,"skipped":1799,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should be possible to delete [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:17:11.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9051" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":93,"skipped":1824,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 50 lines ...
• [SLOW TEST:13.219 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":346,"completed":94,"skipped":1825,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:17:24.836: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Sep 17 00:17:24.882: INFO: Waiting up to 5m0s for pod "pod-3ba746df-2b27-4d93-99f1-c026f1b0caec" in namespace "emptydir-4917" to be "Succeeded or Failed"
Sep 17 00:17:24.890: INFO: Pod "pod-3ba746df-2b27-4d93-99f1-c026f1b0caec": Phase="Pending", Reason="", readiness=false. Elapsed: 7.903243ms
Sep 17 00:17:26.898: INFO: Pod "pod-3ba746df-2b27-4d93-99f1-c026f1b0caec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015388937s
STEP: Saw pod success
Sep 17 00:17:26.898: INFO: Pod "pod-3ba746df-2b27-4d93-99f1-c026f1b0caec" satisfied condition "Succeeded or Failed"
Sep 17 00:17:26.901: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-3ba746df-2b27-4d93-99f1-c026f1b0caec container test-container: <nil>
STEP: delete the pod
Sep 17 00:17:26.942: INFO: Waiting for pod pod-3ba746df-2b27-4d93-99f1-c026f1b0caec to disappear
Sep 17 00:17:26.945: INFO: Pod pod-3ba746df-2b27-4d93-99f1-c026f1b0caec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:17:26.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4917" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":95,"skipped":1831,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-apps] DisruptionController 
  should observe PodDisruptionBudget status updated [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 18 lines ...
• [SLOW TEST:6.234 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should observe PodDisruptionBudget status updated [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":96,"skipped":1841,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PreStop
... skipping 32 lines ...
• [SLOW TEST:9.201 seconds]
[sig-node] PreStop
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":346,"completed":97,"skipped":1854,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 21 lines ...
• [SLOW TEST:11.341 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":346,"completed":98,"skipped":1856,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 4 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-khml
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 00:17:53.804: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-khml" in namespace "subpath-9427" to be "Succeeded or Failed"
Sep 17 00:17:53.809: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Pending", Reason="", readiness=false. Elapsed: 5.044054ms
Sep 17 00:17:55.814: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 2.009980785s
Sep 17 00:17:57.818: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 4.014331026s
Sep 17 00:17:59.823: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 6.018946059s
Sep 17 00:18:01.827: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 8.023071756s
Sep 17 00:18:03.831: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 10.026607212s
... skipping 2 lines ...
Sep 17 00:18:09.845: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 16.040865242s
Sep 17 00:18:11.850: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 18.045993755s
Sep 17 00:18:13.854: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 20.049748958s
Sep 17 00:18:15.860: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Running", Reason="", readiness=true. Elapsed: 22.055759362s
Sep 17 00:18:17.865: INFO: Pod "pod-subpath-test-downwardapi-khml": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.060567182s
STEP: Saw pod success
Sep 17 00:18:17.865: INFO: Pod "pod-subpath-test-downwardapi-khml" satisfied condition "Succeeded or Failed"
Sep 17 00:18:17.868: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-subpath-test-downwardapi-khml container test-container-subpath-downwardapi-khml: <nil>
STEP: delete the pod
Sep 17 00:18:17.893: INFO: Waiting for pod pod-subpath-test-downwardapi-khml to disappear
Sep 17 00:18:17.899: INFO: Pod pod-subpath-test-downwardapi-khml no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-khml
Sep 17 00:18:17.899: INFO: Deleting pod "pod-subpath-test-downwardapi-khml" in namespace "subpath-9427"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":346,"completed":99,"skipped":1856,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
S
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:18:20.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9311" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":346,"completed":100,"skipped":1857,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:18:23.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6776" for this suite.
STEP: Destroying namespace "webhook-6776-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":346,"completed":101,"skipped":1903,"failed":1,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]"]}
SSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 63 lines ...
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0917 00:22:01.705360    2918 boskos.go:86] Sending heartbeat to Boskos
I0917 00:27:01.721751    2918 boskos.go:86] Sending heartbeat to Boskos
Sep 17 00:28:24.979: INFO: Timed out waiting for the following pods to schedule
Sep 17 00:28:24.979: INFO: kube-system/konnectivity-agent-ngzhd
Sep 17 00:28:24.979: FAIL: Timed out after 10m0s waiting for stable cluster.

Full Stack Trace
k8s.io/kubernetes/test/e2e/scheduling.glob..func4.5()
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x229aa57)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:128 +0x697
... skipping 127 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Sep 17 00:28:24.980: Timed out after 10m0s waiting for stable cluster.

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323
------------------------------
{"msg":"FAILED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":346,"completed":101,"skipped":1906,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 00:28:25.618: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 00:28:25.699: INFO: Waiting up to 5m0s for pod "downward-api-2504dfb3-4013-487f-9dd8-776795e818d7" in namespace "downward-api-8808" to be "Succeeded or Failed"
Sep 17 00:28:25.706: INFO: Pod "downward-api-2504dfb3-4013-487f-9dd8-776795e818d7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.055523ms
Sep 17 00:28:27.711: INFO: Pod "downward-api-2504dfb3-4013-487f-9dd8-776795e818d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012111574s
STEP: Saw pod success
Sep 17 00:28:27.711: INFO: Pod "downward-api-2504dfb3-4013-487f-9dd8-776795e818d7" satisfied condition "Succeeded or Failed"
Sep 17 00:28:27.714: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downward-api-2504dfb3-4013-487f-9dd8-776795e818d7 container dapi-container: <nil>
STEP: delete the pod
Sep 17 00:28:27.731: INFO: Waiting for pod downward-api-2504dfb3-4013-487f-9dd8-776795e818d7 to disappear
Sep 17 00:28:27.736: INFO: Pod downward-api-2504dfb3-4013-487f-9dd8-776795e818d7 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:27.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8808" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":346,"completed":102,"skipped":1930,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:27.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-376" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":103,"skipped":1934,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-8cf653bf-757b-4dfd-8d3a-8d2ab188d865
STEP: Creating a pod to test consume secrets
Sep 17 00:28:27.884: INFO: Waiting up to 5m0s for pod "pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597" in namespace "secrets-7757" to be "Succeeded or Failed"
Sep 17 00:28:27.890: INFO: Pod "pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597": Phase="Pending", Reason="", readiness=false. Elapsed: 5.497657ms
Sep 17 00:28:29.895: INFO: Pod "pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010323188s
STEP: Saw pod success
Sep 17 00:28:29.895: INFO: Pod "pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597" satisfied condition "Succeeded or Failed"
Sep 17 00:28:29.897: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:28:29.916: INFO: Waiting for pod pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597 to disappear
Sep 17 00:28:29.919: INFO: Pod pod-secrets-c7caff60-5585-43d0-8a87-cb2abd419597 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:29.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7757" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":104,"skipped":1966,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7421b4f3-d4c4-42fc-8d55-96f56544063e
STEP: Creating a pod to test consume secrets
Sep 17 00:28:29.977: INFO: Waiting up to 5m0s for pod "pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d" in namespace "secrets-8113" to be "Succeeded or Failed"
Sep 17 00:28:29.983: INFO: Pod "pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.205557ms
Sep 17 00:28:31.987: INFO: Pod "pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009087375s
STEP: Saw pod success
Sep 17 00:28:31.987: INFO: Pod "pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d" satisfied condition "Succeeded or Failed"
Sep 17 00:28:31.989: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d container secret-env-test: <nil>
STEP: delete the pod
Sep 17 00:28:32.005: INFO: Waiting for pod pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d to disappear
Sep 17 00:28:32.010: INFO: Pod pod-secrets-b2acaa8d-7f37-4119-abd6-d2c28f200b7d no longer exists
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:32.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8113" for this suite.
•{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":346,"completed":105,"skipped":1987,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 00:28:32.018: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Sep 17 00:28:32.070: INFO: Waiting up to 5m0s for pod "client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e" in namespace "containers-6141" to be "Succeeded or Failed"
Sep 17 00:28:32.075: INFO: Pod "client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.130504ms
Sep 17 00:28:34.079: INFO: Pod "client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008871625s
STEP: Saw pod success
Sep 17 00:28:34.079: INFO: Pod "client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e" satisfied condition "Succeeded or Failed"
Sep 17 00:28:34.081: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:28:34.096: INFO: Waiting for pod client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e to disappear
Sep 17 00:28:34.102: INFO: Pod client-containers-981245f3-5cf6-46c0-b7e0-40a42a97e52e no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:34.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6141" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":106,"skipped":2051,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:28:34.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5" in namespace "downward-api-3027" to be "Succeeded or Failed"
Sep 17 00:28:34.165: INFO: Pod "downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.314551ms
Sep 17 00:28:36.173: INFO: Pod "downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012769435s
STEP: Saw pod success
Sep 17 00:28:36.173: INFO: Pod "downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5" satisfied condition "Succeeded or Failed"
Sep 17 00:28:36.178: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5 container client-container: <nil>
STEP: delete the pod
Sep 17 00:28:36.203: INFO: Waiting for pod downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5 to disappear
Sep 17 00:28:36.209: INFO: Pod downwardapi-volume-5089eb15-5428-4ffa-ac5c-eb4b127b66a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:36.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3027" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":107,"skipped":2052,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 29 lines ...
• [SLOW TEST:5.277 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":108,"skipped":2083,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-d820256b-ce6e-42cb-9152-dba7cb2134a6
STEP: Creating a pod to test consume configMaps
Sep 17 00:28:41.679: INFO: Waiting up to 5m0s for pod "pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb" in namespace "configmap-8500" to be "Succeeded or Failed"
Sep 17 00:28:41.691: INFO: Pod "pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb": Phase="Pending", Reason="", readiness=false. Elapsed: 11.954595ms
Sep 17 00:28:43.695: INFO: Pod "pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015918118s
STEP: Saw pod success
Sep 17 00:28:43.695: INFO: Pod "pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb" satisfied condition "Succeeded or Failed"
Sep 17 00:28:43.702: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:28:43.740: INFO: Waiting for pod pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb to disappear
Sep 17 00:28:43.751: INFO: Pod pod-configmaps-ddb21513-a032-4713-8d2a-6d9e379c00bb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:43.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8500" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":109,"skipped":2085,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:28:44.000: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3" in namespace "downward-api-665" to be "Succeeded or Failed"
Sep 17 00:28:44.006: INFO: Pod "downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.47595ms
Sep 17 00:28:46.012: INFO: Pod "downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011231437s
STEP: Saw pod success
Sep 17 00:28:46.012: INFO: Pod "downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3" satisfied condition "Succeeded or Failed"
Sep 17 00:28:46.016: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3 container client-container: <nil>
STEP: delete the pod
Sep 17 00:28:46.054: INFO: Waiting for pod downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3 to disappear
Sep 17 00:28:46.062: INFO: Pod downwardapi-volume-41d9be66-c25e-48e6-aaf9-37184a615fd3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:46.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-665" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":2140,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 17 lines ...
Sep 17 00:28:46.214: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6965  155a6c40-5b46-4daa-8c89-bd3e5274caa9 10622 0 2021-09-17 00:28:46 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-17 00:28:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 17 00:28:46.215: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-6965  155a6c40-5b46-4daa-8c89-bd3e5274caa9 10623 0 2021-09-17 00:28:46 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-09-17 00:28:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:28:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6965" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":111,"skipped":2145,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 51 lines ...
• [SLOW TEST:40.671 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":112,"skipped":2151,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes control plane services is included in cluster-info  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 17 00:29:27.185: INFO: stderr: ""
Sep 17 00:29:27.185: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://35.222.74.146\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:29:27.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1223" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":346,"completed":113,"skipped":2186,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should support CronJob API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 23 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:29:27.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8456" for this suite.
•{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":346,"completed":114,"skipped":2247,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-node] RuntimeClass 
   should support RuntimeClasses API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] RuntimeClass
... skipping 18 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:29:27.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-3473" for this suite.
•{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":346,"completed":115,"skipped":2250,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:29:27.608: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 00:29:27.666: INFO: Waiting up to 5m0s for pod "pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477" in namespace "emptydir-355" to be "Succeeded or Failed"
Sep 17 00:29:27.671: INFO: Pod "pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477": Phase="Pending", Reason="", readiness=false. Elapsed: 5.0269ms
Sep 17 00:29:29.675: INFO: Pod "pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009095615s
STEP: Saw pod success
Sep 17 00:29:29.675: INFO: Pod "pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477" satisfied condition "Succeeded or Failed"
Sep 17 00:29:29.678: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477 container test-container: <nil>
STEP: delete the pod
Sep 17 00:29:29.698: INFO: Waiting for pod pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477 to disappear
Sep 17 00:29:29.703: INFO: Pod pod-0b9d7fbf-f2b8-4481-a146-1d8bfd2f4477 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:29:29.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-355" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":116,"skipped":2271,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:29:33.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3452" for this suite.
STEP: Destroying namespace "webhook-3452-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":346,"completed":117,"skipped":2274,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:242.920 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":346,"completed":118,"skipped":2276,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 23 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:40.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9872" for this suite.
STEP: Destroying namespace "webhook-9872-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":346,"completed":119,"skipped":2295,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should list and delete a collection of DaemonSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 30 lines ...
Sep 17 00:33:43.810: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"11575"},"items":[{"metadata":{"name":"daemon-set-dcslc","generateName":"daemon-set-","namespace":"daemonsets-420","uid":"47c33632-9bd2-4dad-8d77-2ee0fcac857c","resourceVersion":"11561","creationTimestamp":"2021-09-17T00:33:40Z","labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"9766aee3-d9f4-422e-a765-3836e1d0874c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9766aee3-d9f4-422e-a765-3836e1d0874c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:42Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTim
e":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.1.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-tv5m4","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-tv5m4","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-280c76ac-1743-minion-group-rr86","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-280c76ac-1743-minion-group-rr86"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":
true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:42Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:42Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"}],"hostIP":"10.128.0.5","podIP":"10.64.1.28","podIPs":[{"ip":"10.64.1.28"}],"startTime":"2021-09-17T00:33:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T00:33:41Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://61c212454729f159bb62d4e1154df3032251159795ba08b748ebccdd76ca154d","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-s2zvx","generateName":"daemon-set-","namespace":"daemonsets-420","uid":"f87eab70-6bbc-45b1-b60f-c339d9772414","resourceVersion":"11574","creationTimestamp":"2021-09-17T00:33:40Z","deletionTimestamp":"2021-09-17T00:34:13Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"9766aee3-d9f4-422e-a765-3836e1d0874c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9766aee3-d9f4-422e-a765-3836e1d0
874c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.100\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-v6dhm","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-v6dhm","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"
}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-280c76ac-1743-minion-group-xp78","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-280c76ac-1743-minion-group-xp78"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:43Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:43Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"}],"hostIP":"10.128.0.3","podIP":"10.64.3.100","podIPs":[{"ip":"10.64.3.100"}],"startTime":"2021-09-17T00:33:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T00:33:43Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://f714b31822812bc96d810a8d9d0f04cc
d1a0671b48c889ad43941759aa4f0ac0","started":true}],"qosClass":"BestEffort"}},{"metadata":{"name":"daemon-set-tq4p9","generateName":"daemon-set-","namespace":"daemonsets-420","uid":"831569f4-77a6-493b-8dda-9039353e59d9","resourceVersion":"11575","creationTimestamp":"2021-09-17T00:33:40Z","deletionTimestamp":"2021-09-17T00:34:13Z","deletionGracePeriodSeconds":30,"labels":{"controller-revision-hash":"5879b9c499","daemonset-name":"daemon-set","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"daemon-set","uid":"9766aee3-d9f4-422e-a765-3836e1d0874c","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:daemonset-name":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9766aee3-d9f4-422e-a765-3836e1d0874c\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"app\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":9376,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:tolerations":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2021-09-17T00:33:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":
\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.0.51\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"kube-api-access-mzsj6","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"app","image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","ports":[{"containerPort":9376,"protocol":"TCP"}],"resources":{},"volumeMounts":[{"name":"kube-api-access-mzsj6","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"kt2-280c76ac-1743-minion-group-8xgx","securityContext":{},"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchFields":[{"key":"metadata.name","operator":"In","values":["kt2-280c76ac-1743-minion-group-8xgx"]}]}]}}},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute"},{"key":"node.kubernetes.io/disk-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/memory-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/pid-pressure","operator":"Exists","effect":"NoSchedule"},{"key":"node.kubernetes.io/unschedulable","operator":"Exists","effect":"NoSchedule"}]
,"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:41Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:41Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2021-09-17T00:33:40Z"}],"hostIP":"10.128.0.4","podIP":"10.64.0.51","podIPs":[{"ip":"10.64.0.51"}],"startTime":"2021-09-17T00:33:40Z","containerStatuses":[{"name":"app","state":{"running":{"startedAt":"2021-09-17T00:33:41Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2","imageID":"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3","containerID":"containerd://01bf9a7472b2c2e4f8432addfb8372430742565004c8deee39a624d7490a593b","started":true}],"qosClass":"BestEffort"}}]}

[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:43.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-420" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should list and delete a collection of DaemonSets [Conformance]","total":346,"completed":120,"skipped":2333,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:11.127 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":346,"completed":121,"skipped":2379,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] Discovery 
  should validate PreferredVersion for each APIGroup [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Discovery
... skipping 104 lines ...
Sep 17 00:33:55.557: INFO: Versions found [{metrics.k8s.io/v1beta1 v1beta1}]
Sep 17 00:33:55.557: INFO: metrics.k8s.io/v1beta1 matches metrics.k8s.io/v1beta1
[AfterEach] [sig-api-machinery] Discovery
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:55.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-144" for this suite.
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":122,"skipped":2387,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-84a71563-0060-4db0-abc2-995897cf2534
STEP: Creating a pod to test consume secrets
Sep 17 00:33:55.685: INFO: Waiting up to 5m0s for pod "pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651" in namespace "secrets-7828" to be "Succeeded or Failed"
Sep 17 00:33:55.692: INFO: Pod "pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651": Phase="Pending", Reason="", readiness=false. Elapsed: 6.886696ms
Sep 17 00:33:57.696: INFO: Pod "pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011131336s
STEP: Saw pod success
Sep 17 00:33:57.696: INFO: Pod "pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651" satisfied condition "Succeeded or Failed"
Sep 17 00:33:57.700: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:33:57.757: INFO: Waiting for pod pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651 to disappear
Sep 17 00:33:57.761: INFO: Pod pod-secrets-d82f609d-b465-4eef-beac-ef0a8d61d651 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:57.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7828" for this suite.
STEP: Destroying namespace "secret-namespace-296" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":123,"skipped":2388,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 36 lines ...
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:59.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8551" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":124,"skipped":2394,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 7 lines ...
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:33:59.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3381" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":125,"skipped":2419,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:33:59.485: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-858090a8-938e-414f-be3d-270b187f04a5" in namespace "security-context-test-9307" to be "Succeeded or Failed"
Sep 17 00:33:59.490: INFO: Pod "busybox-privileged-false-858090a8-938e-414f-be3d-270b187f04a5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.287841ms
Sep 17 00:34:01.497: INFO: Pod "busybox-privileged-false-858090a8-938e-414f-be3d-270b187f04a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011089306s
Sep 17 00:34:01.497: INFO: Pod "busybox-privileged-false-858090a8-938e-414f-be3d-270b187f04a5" satisfied condition "Succeeded or Failed"
Sep 17 00:34:01.507: INFO: Got logs for pod "busybox-privileged-false-858090a8-938e-414f-be3d-270b187f04a5": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:34:01.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9307" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":126,"skipped":2443,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-34731353-26fb-45f1-8bb8-db41deae4e6f
STEP: Creating a pod to test consume secrets
Sep 17 00:34:01.632: INFO: Waiting up to 5m0s for pod "pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1" in namespace "secrets-3243" to be "Succeeded or Failed"
Sep 17 00:34:01.686: INFO: Pod "pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1": Phase="Pending", Reason="", readiness=false. Elapsed: 53.733339ms
Sep 17 00:34:03.691: INFO: Pod "pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.058734431s
STEP: Saw pod success
Sep 17 00:34:03.691: INFO: Pod "pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1" satisfied condition "Succeeded or Failed"
Sep 17 00:34:03.694: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:34:03.732: INFO: Waiting for pod pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1 to disappear
Sep 17 00:34:03.742: INFO: Pod pod-secrets-81473688-88e0-484d-ba59-bde6b10e85a1 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:34:03.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3243" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":127,"skipped":2474,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 40 lines ...
• [SLOW TEST:10.356 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":346,"completed":128,"skipped":2491,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 71 lines ...
• [SLOW TEST:32.315 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":129,"skipped":2507,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:34:46.436: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 17 00:34:46.510: INFO: Waiting up to 5m0s for pod "pod-39a1333d-7292-4536-bb21-24c41ee267df" in namespace "emptydir-9220" to be "Succeeded or Failed"
Sep 17 00:34:46.533: INFO: Pod "pod-39a1333d-7292-4536-bb21-24c41ee267df": Phase="Pending", Reason="", readiness=false. Elapsed: 22.857944ms
Sep 17 00:34:48.537: INFO: Pod "pod-39a1333d-7292-4536-bb21-24c41ee267df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.027096086s
STEP: Saw pod success
Sep 17 00:34:48.537: INFO: Pod "pod-39a1333d-7292-4536-bb21-24c41ee267df" satisfied condition "Succeeded or Failed"
Sep 17 00:34:48.541: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-39a1333d-7292-4536-bb21-24c41ee267df container test-container: <nil>
STEP: delete the pod
Sep 17 00:34:48.561: INFO: Waiting for pod pod-39a1333d-7292-4536-bb21-24c41ee267df to disappear
Sep 17 00:34:48.565: INFO: Pod pod-39a1333d-7292-4536-bb21-24c41ee267df no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:34:48.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9220" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2510,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-a2b98522-9cd0-412e-8d54-ac938f02f239
STEP: Creating a pod to test consume configMaps
Sep 17 00:34:48.630: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca" in namespace "projected-1270" to be "Succeeded or Failed"
Sep 17 00:34:48.636: INFO: Pod "pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.304359ms
Sep 17 00:34:50.643: INFO: Pod "pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013554121s
STEP: Saw pod success
Sep 17 00:34:50.643: INFO: Pod "pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca" satisfied condition "Succeeded or Failed"
Sep 17 00:34:50.649: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:34:50.677: INFO: Waiting for pod pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca to disappear
Sep 17 00:34:50.682: INFO: Pod pod-projected-configmaps-e34e3061-327a-4047-9fc8-5dc3991627ca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:34:50.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1270" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":131,"skipped":2583,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-c1719780-12b8-4530-826b-dd3a108eac60
STEP: Creating a pod to test consume configMaps
Sep 17 00:34:50.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46" in namespace "projected-7712" to be "Succeeded or Failed"
Sep 17 00:34:50.768: INFO: Pod "pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46": Phase="Pending", Reason="", readiness=false. Elapsed: 5.462268ms
Sep 17 00:34:52.772: INFO: Pod "pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010192709s
STEP: Saw pod success
Sep 17 00:34:52.772: INFO: Pod "pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46" satisfied condition "Succeeded or Failed"
Sep 17 00:34:52.775: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:34:52.810: INFO: Waiting for pod pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46 to disappear
Sep 17 00:34:52.815: INFO: Pod pod-projected-configmaps-8c5ffcde-b334-43b0-aaff-a6adce54bd46 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:34:52.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7712" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2597,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 19 lines ...
• [SLOW TEST:28.235 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":133,"skipped":2657,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 17 00:35:21.061: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:35:21.143: INFO: created pod
Sep 17 00:35:21.143: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6814" to be "Succeeded or Failed"
Sep 17 00:35:21.156: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 13.047327ms
Sep 17 00:35:23.161: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018418211s
STEP: Saw pod success
Sep 17 00:35:23.161: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Sep 17 00:35:53.162: INFO: polling logs
Sep 17 00:35:53.172: INFO: Pod logs: 
2021/09/17 00:35:22 OK: Got token
2021/09/17 00:35:22 validating with in-cluster discovery
2021/09/17 00:35:22 OK: got issuer https://kubernetes.default.svc.cluster.local
2021/09/17 00:35:22 Full, not-validated claims: 
... skipping 13 lines ...
• [SLOW TEST:32.130 seconds]
[sig-auth] ServiceAccounts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":346,"completed":134,"skipped":2777,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
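The ServiceAccountIssuerDiscovery test above has the validator pod read the `iss` claim from its projected token (here `https://kubernetes.default.svc.cluster.local`) and fetch the issuer's discovery document. An illustrative Python sketch of how a client derives that endpoint per the OIDC Discovery convention (this is not the Go e2e framework code; the helper name is hypothetical):

```python
def discovery_url(issuer: str) -> str:
    """Derive the OIDC discovery document location from an issuer URL,
    as the oidc-discovery-validator pod does after reading the "iss"
    claim from its projected service account token."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

print(discovery_url("https://kubernetes.default.svc.cluster.local"))
```

The discovery document at that URL then advertises `jwks_uri`, which the validator uses to verify the token's signature in-cluster.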
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 126 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should scale a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":346,"completed":135,"skipped":2782,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 16 lines ...
• [SLOW TEST:6.660 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":136,"skipped":2782,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-network] Services 
  should test the lifecycle of an Endpoint [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 19 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:36:08.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8693" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":137,"skipped":2787,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:36:08.700: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:36:10.787: INFO: Deleting pod "var-expansion-b2b89c80-14c0-449f-b863-ee518c31b4a7" in namespace "var-expansion-9224"
Sep 17 00:36:10.793: INFO: Wait up to 5m0s for pod "var-expansion-b2b89c80-14c0-449f-b863-ee518c31b4a7" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:36:12.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9224" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":138,"skipped":2789,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 110 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":346,"completed":139,"skipped":2789,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 54 lines ...
• [SLOW TEST:7.832 seconds]
[sig-node] KubeletManagedEtcHosts
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":140,"skipped":2796,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 00:37:32.691: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 00:37:32.760: INFO: Waiting up to 5m0s for pod "downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99" in namespace "downward-api-7580" to be "Succeeded or Failed"
Sep 17 00:37:32.765: INFO: Pod "downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99": Phase="Pending", Reason="", readiness=false. Elapsed: 5.234843ms
Sep 17 00:37:34.770: INFO: Pod "downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009830713s
STEP: Saw pod success
Sep 17 00:37:34.770: INFO: Pod "downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99" satisfied condition "Succeeded or Failed"
Sep 17 00:37:34.772: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99 container dapi-container: <nil>
STEP: delete the pod
Sep 17 00:37:34.823: INFO: Waiting for pod downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99 to disappear
Sep 17 00:37:34.835: INFO: Pod downward-api-378a7ab2-97d1-48d4-965c-ac09c800ce99 no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:37:34.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7580" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":141,"skipped":2811,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
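The `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from the framework's phase-polling loop: check the pod, stop on a terminal phase, retry on an interval until the deadline. A minimal Python sketch of that pattern (illustrative only, not the Go framework implementation; `get_phase` is a hypothetical callable standing in for an API-server GET):

```python
import time

def wait_for_pod_terminal(get_phase, timeout=300.0, interval=2.0,
                          clock=time.monotonic, sleep=time.sleep):
    """Poll a pod's phase until it reaches a terminal state
    ("Succeeded" or "Failed"), or raise after the timeout."""
    deadline = clock() + timeout
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if clock() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)

# Simulated pod: Pending on the first poll, Succeeded on the second,
# matching the two INFO lines in the log above.
phases = iter(["Pending", "Succeeded"])
print(wait_for_pod_terminal(lambda: next(phases), sleep=lambda _: None))
```

In the log, the first poll at 5ms sees `Phase="Pending"` and the second at ~2s sees `Phase="Succeeded"`, satisfying the condition.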

------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 8 lines ...
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:37:39.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7100" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":346,"completed":142,"skipped":2811,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
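The concurrent-watch test above starts watches from each resourceVersion of a produced event stream and verifies they all see the remaining events in the same order. The property being checked can be sketched in Python as a suffix comparison (an illustrative model, not the e2e test's Go code; resourceVersions are simplified to integers):

```python
def same_event_order(streams):
    """Check that every watch stream agrees on event order: a watch
    started from a later resourceVersion must observe exactly the
    tail (suffix) of the full event sequence."""
    reference = max(streams, key=len)  # the watch started earliest
    return all(reference[len(reference) - len(s):] == s for s in streams)

# Watches started at resourceVersions 0, 1, and 3 of a 4-event stream:
print(same_event_order([[1, 2, 3, 4], [2, 3, 4], [4]]))
```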
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 49 lines ...
• [SLOW TEST:10.397 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":143,"skipped":2819,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":144,"skipped":2928,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 24 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:02.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3391" for this suite.
STEP: Destroying namespace "webhook-3391-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":145,"skipped":2961,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 17 00:38:04.303: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.310: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.347: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.355: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.363: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.370: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:04.458: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:09.467: INFO: Unable to read wheezy_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:09.475: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:09.570: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:09.577: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:09.765: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:14.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:14.470: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:14.538: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:14.639: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:14.677: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:19.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:19.474: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:19.557: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:19.562: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:19.652: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:24.466: INFO: Unable to read wheezy_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:24.473: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:24.546: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:24.565: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:24.646: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:29.465: INFO: Unable to read wheezy_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:29.471: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:29.513: INFO: Unable to read jessie_udp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:29.538: INFO: Unable to read jessie_tcp@dns-test-service.dns-4474.svc.cluster.local from pod dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24: the server could not find the requested resource (get pods dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24)
Sep 17 00:38:29.638: INFO: Lookups using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 failed for: [wheezy_udp@dns-test-service.dns-4474.svc.cluster.local wheezy_tcp@dns-test-service.dns-4474.svc.cluster.local jessie_udp@dns-test-service.dns-4474.svc.cluster.local jessie_tcp@dns-test-service.dns-4474.svc.cluster.local]

Sep 17 00:38:34.639: INFO: DNS probes using dns-4474/dns-test-3b4a48e7-229b-467f-8ecf-3ef582689f24 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:32.745 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":346,"completed":146,"skipped":2975,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
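The retried lookups above follow a fixed naming scheme: for each prober image (`wheezy`, `jessie`), a UDP and a TCP lookup of the service FQDN and of its `_http._tcp` SRV name. A Python sketch that reconstructs the probe names seen in the log (illustrative only; the actual name generation lives in the Go DNS e2e tests):

```python
def dns_probe_names(service: str, namespace: str,
                    images=("wheezy", "jessie")) -> list:
    """Build the lookup names the DNS test retries: per prober image,
    UDP and TCP lookups of the service FQDN and its _http._tcp SRV name."""
    fqdn = f"{service}.{namespace}.svc.cluster.local"
    names = []
    for image in images:
        for target in (fqdn, f"_http._tcp.{fqdn}"):
            for proto in ("udp", "tcp"):
                names.append(f"{image}_{proto}@{target}")
    return names

for name in dns_probe_names("dns-test-service", "dns-4474"):
    print(name)
```

Each failing retry in the log reports the subset of these eight names still unresolved; the test passes once a probe round resolves all of them (here at 00:38:34, ~32s in).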
SSSS
------------------------------
[sig-node] Pods 
  should delete a collection of pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 17 lines ...
Sep 17 00:38:37.017: INFO: Pod quantity 3 is different from expected quantity 0
Sep 17 00:38:38.018: INFO: Pod quantity 3 is different from expected quantity 0
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:39.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3240" for this suite.
•{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":346,"completed":147,"skipped":2979,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:38:39.069: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:39.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7626" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":346,"completed":148,"skipped":3047,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-instrumentation] Events 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 14 lines ...
STEP: check that the list of events matches the requested quantity
Sep 17 00:38:39.943: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:39.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7723" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":149,"skipped":3057,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 14 lines ...
STEP: Creating configMap with name cm-test-opt-create-4a6a61c1-4831-47d8-b253-24bf20644fb5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:44.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5176" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":150,"skipped":3065,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:38:44.263: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:38:46.361: INFO: Deleting pod "var-expansion-956a3483-0dd5-4beb-ab96-cd36fd23e851" in namespace "var-expansion-836"
Sep 17 00:38:46.370: INFO: Wait up to 5m0s for pod "var-expansion-956a3483-0dd5-4beb-ab96-cd36fd23e851" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:38:48.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-836" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":346,"completed":151,"skipped":3082,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 29 lines ...
• [SLOW TEST:10.101 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":346,"completed":152,"skipped":3084,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-map-4b3fa6af-9fef-47fd-b67b-c685860b98ac
STEP: Creating a pod to test consume secrets
Sep 17 00:38:58.544: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616" in namespace "projected-3509" to be "Succeeded or Failed"
Sep 17 00:38:58.549: INFO: Pod "pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616": Phase="Pending", Reason="", readiness=false. Elapsed: 4.594682ms
Sep 17 00:39:00.554: INFO: Pod "pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010413366s
STEP: Saw pod success
Sep 17 00:39:00.555: INFO: Pod "pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616" satisfied condition "Succeeded or Failed"
Sep 17 00:39:00.557: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:39:00.579: INFO: Waiting for pod pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616 to disappear
Sep 17 00:39:00.583: INFO: Pod pod-projected-secrets-e3e8e863-d598-46bc-81aa-c51111266616 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:00.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3509" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":3108,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-41551462-f1cd-445c-86b9-b1aaa68cec5b
STEP: Creating a pod to test consume secrets
Sep 17 00:39:00.664: INFO: Waiting up to 5m0s for pod "pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303" in namespace "secrets-4118" to be "Succeeded or Failed"
Sep 17 00:39:00.670: INFO: Pod "pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303": Phase="Pending", Reason="", readiness=false. Elapsed: 5.884109ms
Sep 17 00:39:02.675: INFO: Pod "pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010866386s
STEP: Saw pod success
Sep 17 00:39:02.675: INFO: Pod "pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303" satisfied condition "Succeeded or Failed"
Sep 17 00:39:02.679: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:39:02.700: INFO: Waiting for pod pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303 to disappear
Sep 17 00:39:02.705: INFO: Pod pod-secrets-1841bd65-2529-4f6c-a4f4-7e29e585a303 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:02.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4118" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":154,"skipped":3121,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should run through a ConfigMap lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
... skipping 11 lines ...
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:02.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2389" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":346,"completed":155,"skipped":3187,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:39:02.858: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d" in namespace "downward-api-7600" to be "Succeeded or Failed"
Sep 17 00:39:02.863: INFO: Pod "downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.777722ms
Sep 17 00:39:04.868: INFO: Pod "downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009586166s
STEP: Saw pod success
Sep 17 00:39:04.868: INFO: Pod "downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d" satisfied condition "Succeeded or Failed"
Sep 17 00:39:04.870: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d container client-container: <nil>
STEP: delete the pod
Sep 17 00:39:04.887: INFO: Waiting for pod downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d to disappear
Sep 17 00:39:04.891: INFO: Pod downwardapi-volume-05ed9685-84c0-4b6e-b7d0-249700e6515d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:04.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7600" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":156,"skipped":3206,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 12 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3463
STEP: Waiting until pod test-pod will start running in namespace statefulset-3463
STEP: Creating statefulset with conflicting port in namespace statefulset-3463
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3463
Sep 17 00:39:07.003: INFO: Observed stateful pod in namespace: statefulset-3463, name: ss-0, uid: b3d1d315-4c9a-4776-8e98-994c5c4d28e6, status phase: Pending. Waiting for statefulset controller to delete.
Sep 17 00:39:07.032: INFO: Observed stateful pod in namespace: statefulset-3463, name: ss-0, uid: b3d1d315-4c9a-4776-8e98-994c5c4d28e6, status phase: Failed. Waiting for statefulset controller to delete.
Sep 17 00:39:07.043: INFO: Observed stateful pod in namespace: statefulset-3463, name: ss-0, uid: b3d1d315-4c9a-4776-8e98-994c5c4d28e6, status phase: Failed. Waiting for statefulset controller to delete.
Sep 17 00:39:07.053: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3463
STEP: Removing pod with conflicting port in namespace statefulset-3463
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3463 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Sep 17 00:39:09.121: INFO: Deleting all statefulset in ns statefulset-3463
... skipping 10 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    Should recreate evicted statefulset [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":157,"skipped":3241,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:39:19.231: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Sep 17 00:39:19.302: INFO: Waiting up to 5m0s for pod "pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752" in namespace "emptydir-7960" to be "Succeeded or Failed"
Sep 17 00:39:19.309: INFO: Pod "pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752": Phase="Pending", Reason="", readiness=false. Elapsed: 7.699725ms
Sep 17 00:39:21.314: INFO: Pod "pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012774893s
STEP: Saw pod success
Sep 17 00:39:21.315: INFO: Pod "pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752" satisfied condition "Succeeded or Failed"
Sep 17 00:39:21.318: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752 container test-container: <nil>
STEP: delete the pod
Sep 17 00:39:21.347: INFO: Waiting for pod pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752 to disappear
Sep 17 00:39:21.351: INFO: Pod pod-b2f2ef16-9f67-44b2-9c1b-32d54eec2752 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:21.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7960" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":158,"skipped":3270,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 2 lines ...
Sep 17 00:39:21.360: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test service account token: 
Sep 17 00:39:21.418: INFO: Waiting up to 5m0s for pod "test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c" in namespace "svcaccounts-771" to be "Succeeded or Failed"
Sep 17 00:39:21.425: INFO: Pod "test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.69277ms
Sep 17 00:39:23.429: INFO: Pod "test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010993281s
STEP: Saw pod success
Sep 17 00:39:23.429: INFO: Pod "test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c" satisfied condition "Succeeded or Failed"
Sep 17 00:39:23.432: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:39:23.449: INFO: Waiting for pod test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c to disappear
Sep 17 00:39:23.454: INFO: Pod test-pod-62be425f-54ad-4c25-952b-c22e7f24ad9c no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:23.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-771" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":346,"completed":159,"skipped":3279,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
• [SLOW TEST:22.156 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":3421,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 31 lines ...
• [SLOW TEST:8.280 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":346,"completed":161,"skipped":3457,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 14 lines ...
STEP: Creating secret with name s-test-opt-create-1cf95a42-e648-4622-b203-8074e3002391
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:39:58.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7420" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":162,"skipped":3465,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:39:58.146: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92" in namespace "projected-4427" to be "Succeeded or Failed"
Sep 17 00:39:58.156: INFO: Pod "downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92": Phase="Pending", Reason="", readiness=false. Elapsed: 10.089058ms
Sep 17 00:40:00.160: INFO: Pod "downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013839632s
STEP: Saw pod success
Sep 17 00:40:00.160: INFO: Pod "downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92" satisfied condition "Succeeded or Failed"
Sep 17 00:40:00.163: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92 container client-container: <nil>
STEP: delete the pod
Sep 17 00:40:00.184: INFO: Waiting for pod downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92 to disappear
Sep 17 00:40:00.188: INFO: Pod downwardapi-volume-1238b1bf-49dc-40ed-99ac-c036b6fbab92 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:40:00.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4427" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":163,"skipped":3468,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:40:00.284: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:40:03.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7264" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":346,"completed":164,"skipped":3481,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:40:03.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc" in namespace "projected-9692" to be "Succeeded or Failed"
Sep 17 00:40:03.985: INFO: Pod "downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.972162ms
Sep 17 00:40:05.991: INFO: Pod "downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011906943s
STEP: Saw pod success
Sep 17 00:40:05.991: INFO: Pod "downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc" satisfied condition "Succeeded or Failed"
Sep 17 00:40:05.995: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc container client-container: <nil>
STEP: delete the pod
Sep 17 00:40:06.021: INFO: Waiting for pod downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc to disappear
Sep 17 00:40:06.026: INFO: Pod downwardapi-volume-e1cadd69-89c2-469d-9f47-aee36493a4dc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:40:06.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9692" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":165,"skipped":3526,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] LimitRange
... skipping 38 lines ...
• [SLOW TEST:7.319 seconds]
[sig-scheduling] LimitRange
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":346,"completed":166,"skipped":3526,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 00:40:13.356: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's command
Sep 17 00:40:13.430: INFO: Waiting up to 5m0s for pod "var-expansion-ae90023d-77f5-478b-b544-cb2805562a51" in namespace "var-expansion-1455" to be "Succeeded or Failed"
Sep 17 00:40:13.440: INFO: Pod "var-expansion-ae90023d-77f5-478b-b544-cb2805562a51": Phase="Pending", Reason="", readiness=false. Elapsed: 9.308003ms
Sep 17 00:40:15.446: INFO: Pod "var-expansion-ae90023d-77f5-478b-b544-cb2805562a51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016017731s
STEP: Saw pod success
Sep 17 00:40:15.446: INFO: Pod "var-expansion-ae90023d-77f5-478b-b544-cb2805562a51" satisfied condition "Succeeded or Failed"
Sep 17 00:40:15.454: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod var-expansion-ae90023d-77f5-478b-b544-cb2805562a51 container dapi-container: <nil>
STEP: delete the pod
Sep 17 00:40:15.496: INFO: Waiting for pod var-expansion-ae90023d-77f5-478b-b544-cb2805562a51 to disappear
Sep 17 00:40:15.505: INFO: Pod var-expansion-ae90023d-77f5-478b-b544-cb2805562a51 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:40:15.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1455" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":346,"completed":167,"skipped":3531,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
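The "Waiting up to 5m0s for pod … to be \"Succeeded or Failed\"" lines above come from the framework's poll-until-condition loop, which logs the elapsed time on every poll. A minimal sketch of that pattern (the function name and parameters are hypothetical, not the framework's actual API):

```python
import time

def wait_for_condition(check, timeout=300.0, interval=2.0):
    """Poll check() until it returns True or `timeout` seconds elapse.

    Mirrors the e2e framework's 'Waiting up to 5m0s for pod ...' loop,
    which records the elapsed time at each poll and fails on timeout.
    """
    start = time.monotonic()
    while True:
        elapsed = time.monotonic() - start
        if check():
            return elapsed  # condition satisfied; report how long it took
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(interval)

# Demo: a condition that becomes true on the third poll.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

elapsed = wait_for_condition(ready, timeout=5.0, interval=0.01)
```

In the real suite the `check` closure re-fetches the Pod and compares its phase against "Succeeded"/"Failed"; the timeout (5m0s for pod phase, 3m0s for node readiness) varies per call site.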
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSliceMirroring 
  should mirror a custom Endpoints resource through create update and delete [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 12 lines ...
STEP: mirroring deletion of a custom Endpoint
Sep 17 00:40:17.688: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:40:19.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-4220" for this suite.
•{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":346,"completed":168,"skipped":3566,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-nodeport in namespace services-5264
I0917 00:40:19.835318   97243 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-5264, replica count: 3
I0917 00:40:22.886936   97243 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 00:40:22.897: INFO: Creating new exec pod
Sep 17 00:40:25.922: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-5264 exec execpod-affinitypvl4d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 17 00:40:28.094: INFO: rc: 1
Sep 17 00:40:28.095: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-5264 exec execpod-affinitypvl4d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport 80
nc: connect to affinity-nodeport port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 00:40:29.095: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-5264 exec execpod-affinitypvl4d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Sep 17 00:40:29.254: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Sep 17 00:40:29.254: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 00:40:29.254: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-5264 exec execpod-affinitypvl4d -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.102.38 80'
... skipping 38 lines ...
• [SLOW TEST:12.581 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":169,"skipped":3578,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
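The service-reachability probe above shells out to `nc -v -t -w 2 <host> <port>` inside an exec pod and retries once per second until the connect succeeds (the "rc: 1 … Retrying…" lines). The equivalent client-side check can be sketched with Python's socket module; this is an illustrative stand-in, not the test's actual code:

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within
    `timeout` seconds -- the same check `nc -v -t -w 2 host port` performs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, host unreachable, ...
        return False

# Demo against a local listener: a bound, listening socket is reachable;
# once it is closed, the connect is refused and the probe reports False.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port
srv.listen(1)
port = srv.getsockname()[1]
reachable = tcp_reachable("127.0.0.1", port)
srv.close()
unreachable = tcp_reachable("127.0.0.1", port)
```

The suite layers a retry loop on top of this probe because endpoint programming (kube-proxy rules) lags pod creation by a few seconds, which is why the first attempt often fails with "Operation in progress" before a later attempt succeeds.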
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 18 lines ...
• [SLOW TEST:6.662 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":170,"skipped":3592,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 18 lines ...
• [SLOW TEST:82.087 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should replace jobs when ReplaceConcurrent [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":171,"skipped":3617,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-5b71fb20-d4fc-4406-83e0-0dba879d7e05
STEP: Creating a pod to test consume secrets
Sep 17 00:42:01.165: INFO: Waiting up to 5m0s for pod "pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e" in namespace "secrets-9965" to be "Succeeded or Failed"
Sep 17 00:42:01.171: INFO: Pod "pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.333736ms
I0917 00:42:01.763340    2918 boskos.go:86] Sending heartbeat to Boskos
Sep 17 00:42:03.176: INFO: Pod "pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010391727s
STEP: Saw pod success
Sep 17 00:42:03.176: INFO: Pod "pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e" satisfied condition "Succeeded or Failed"
Sep 17 00:42:03.179: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 00:42:03.204: INFO: Waiting for pod pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e to disappear
Sep 17 00:42:03.213: INFO: Pod pod-secrets-612b148b-06b1-41b0-b994-29a3baf89a8e no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:42:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9965" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":172,"skipped":3686,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:16.792 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":346,"completed":173,"skipped":3701,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:42:20.102: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320" in namespace "projected-99" to be "Succeeded or Failed"
Sep 17 00:42:20.109: INFO: Pod "downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320": Phase="Pending", Reason="", readiness=false. Elapsed: 7.193277ms
Sep 17 00:42:22.118: INFO: Pod "downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016133297s
STEP: Saw pod success
Sep 17 00:42:22.118: INFO: Pod "downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320" satisfied condition "Succeeded or Failed"
Sep 17 00:42:22.123: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320 container client-container: <nil>
STEP: delete the pod
Sep 17 00:42:22.182: INFO: Waiting for pod downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320 to disappear
Sep 17 00:42:22.197: INFO: Pod downwardapi-volume-0c9af243-b2b4-419a-ac3a-ced775149320 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:42:22.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-99" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":346,"completed":174,"skipped":3721,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:42:26.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-32" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":175,"skipped":3737,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:9.771 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":346,"completed":176,"skipped":3752,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 3 lines ...
[BeforeEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service endpoint-test2 in namespace services-4279
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4279 to expose endpoints map[]
Sep 17 00:42:36.640: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found
Sep 17 00:42:37.653: INFO: successfully validated that service endpoint-test2 in namespace services-4279 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-4279
Sep 17 00:42:37.674: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 00:42:39.678: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4279 to expose endpoints map[pod1:[80]]
Sep 17 00:42:39.692: INFO: successfully validated that service endpoint-test2 in namespace services-4279 exposes endpoints map[pod1:[80]]
... skipping 20 lines ...
STEP: Deleting pod pod1 in namespace services-4279
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4279 to expose endpoints map[pod2:[80]]
Sep 17 00:42:47.736: INFO: successfully validated that service endpoint-test2 in namespace services-4279 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Sep 17 00:42:48.737: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4279 exec execpodxxp5c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 17 00:42:48.938: INFO: rc: 1
Sep 17 00:42:48.938: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4279 exec execpodxxp5c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 endpoint-test2 80
nc: connect to endpoint-test2 port 80 (tcp) failed: Host is unreachable
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 00:42:49.938: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4279 exec execpodxxp5c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Sep 17 00:42:50.143: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Sep 17 00:42:50.143: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 00:42:50.143: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-4279 exec execpodxxp5c -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.112.246 80'
... skipping 12 lines ...
• [SLOW TEST:14.005 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":346,"completed":177,"skipped":3752,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:42:50.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b" in namespace "projected-4470" to be "Succeeded or Failed"
Sep 17 00:42:50.518: INFO: Pod "downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.660776ms
Sep 17 00:42:52.523: INFO: Pod "downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010477906s
STEP: Saw pod success
Sep 17 00:42:52.523: INFO: Pod "downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b" satisfied condition "Succeeded or Failed"
Sep 17 00:42:52.527: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b container client-container: <nil>
STEP: delete the pod
Sep 17 00:42:52.552: INFO: Waiting for pod downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b to disappear
Sep 17 00:42:52.561: INFO: Pod downwardapi-volume-020177bb-e8a9-42f8-9e48-8fcf0ef9062b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:42:52.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4470" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":178,"skipped":3773,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
• [SLOW TEST:6.331 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":179,"skipped":3786,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 18 lines ...
Sep 17 00:43:03.088: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.095: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.101: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.109: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.120: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.139: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:03.139: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]

Sep 17 00:43:08.150: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.155: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.163: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.170: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.177: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.184: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.191: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.239: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:08.239: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]
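Each lookup round above probes a set of DNS names (UDP and TCP, from both "wheezy" and "jessie" query pods) and retries the whole set every five seconds until all resolve. Loosely, the per-name client-side check looks like this sketch (the probe names are placeholders; the real test reads lookup results back through the API server, which is what the "get pods" errors refer to):

```python
import socket

def resolvable(name):
    """True if `name` resolves via the local resolver -- the client-side
    analogue of one of the suite's per-record lookup probes."""
    try:
        socket.getaddrinfo(name, 80, type=socket.SOCK_STREAM)
        return True
    except socket.gaierror:
        return False

ok = resolvable("localhost")
# RFC 2606 reserves .invalid, so this name can never resolve.
missing = resolvable("dns-test-service.no-such-cluster.invalid")
```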

Sep 17 00:43:13.147: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.155: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.162: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.170: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.181: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.188: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.195: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.203: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:13.203: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]

Sep 17 00:43:18.150: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.156: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.164: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.171: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.177: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.183: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.190: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.198: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:18.198: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]

Sep 17 00:43:23.148: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.157: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.166: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.173: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.180: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.187: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.194: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.202: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:23.202: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]

Sep 17 00:43:28.149: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.158: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.196: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.262: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.270: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.277: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.285: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.292: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local from pod dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f: the server could not find the requested resource (get pods dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f)
Sep 17 00:43:28.292: INFO: Lookups using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9336.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9336.svc.cluster.local jessie_udp@dns-test-service-2.dns-9336.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9336.svc.cluster.local]

Sep 17 00:43:33.249: INFO: DNS probes using dns-9336/dns-test-5b8b6112-11fb-4961-bf13-eb8c7d64039f succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
... skipping 4 lines ...
• [SLOW TEST:34.537 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":346,"completed":180,"skipped":3804,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:43:33.441: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap configmap-38/configmap-test-53602019-e075-4da2-aa85-90b69b780181
STEP: Creating a pod to test consume configMaps
Sep 17 00:43:33.538: INFO: Waiting up to 5m0s for pod "pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc" in namespace "configmap-38" to be "Succeeded or Failed"
Sep 17 00:43:33.549: INFO: Pod "pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.945068ms
Sep 17 00:43:35.556: INFO: Pod "pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017106501s
STEP: Saw pod success
Sep 17 00:43:35.556: INFO: Pod "pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc" satisfied condition "Succeeded or Failed"
Sep 17 00:43:35.559: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc container env-test: <nil>
STEP: delete the pod
Sep 17 00:43:35.588: INFO: Waiting for pod pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc to disappear
Sep 17 00:43:35.592: INFO: Pod pod-configmaps-0caa8b90-48c3-4add-b54e-03c6e55784dc no longer exists
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:43:35.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-38" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":346,"completed":181,"skipped":3804,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 17 00:43:35.688: INFO: The status of Pod busybox-host-aliasesa6097148-4d8c-4e29-8422-a6277a7678ff is Pending, waiting for it to be Running (with Ready = true)
Sep 17 00:43:37.693: INFO: The status of Pod busybox-host-aliasesa6097148-4d8c-4e29-8422-a6277a7678ff is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:43:37.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8509" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":182,"skipped":3825,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-apps] DisruptionController 
  should create a PodDisruptionBudget [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 14 lines ...
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:43:41.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-843" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":346,"completed":183,"skipped":3826,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 58 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform rolling updates and roll backs of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":184,"skipped":3828,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 45 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":185,"skipped":3837,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 39 lines ...
Sep 17 00:45:52.221: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-4061 explain e2e-test-crd-publish-openapi-8340-crds.spec'
Sep 17 00:45:52.459: INFO: stderr: ""
Sep 17 00:45:52.459: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-8340-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Sep 17 00:45:52.460: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-4061 explain e2e-test-crd-publish-openapi-8340-crds.spec.bars'
Sep 17 00:45:52.618: INFO: stderr: ""
Sep 17 00:45:52.619: INFO: stdout: "KIND:     e2e-test-crd-publish-openapi-8340-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Sep 17 00:45:52.619: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=crd-publish-openapi-4061 explain e2e-test-crd-publish-openapi-8340-crds.spec.bars2'
Sep 17 00:45:52.806: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:45:55.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4061" for this suite.

• [SLOW TEST:9.379 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":186,"skipped":3847,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-7f5a6656-1aa3-43f2-b333-0f3049ed6540
STEP: Creating a pod to test consume configMaps
Sep 17 00:45:55.788: INFO: Waiting up to 5m0s for pod "pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515" in namespace "configmap-1853" to be "Succeeded or Failed"
Sep 17 00:45:55.805: INFO: Pod "pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515": Phase="Pending", Reason="", readiness=false. Elapsed: 16.358989ms
Sep 17 00:45:57.808: INFO: Pod "pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020073395s
STEP: Saw pod success
Sep 17 00:45:57.808: INFO: Pod "pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515" satisfied condition "Succeeded or Failed"
Sep 17 00:45:57.811: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:45:57.869: INFO: Waiting for pod pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515 to disappear
Sep 17 00:45:57.873: INFO: Pod pod-configmaps-e984f6b5-b9bf-415b-ae69-fb1e28f36515 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:45:57.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1853" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":187,"skipped":3856,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:45:57.940: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a" in namespace "projected-1980" to be "Succeeded or Failed"
Sep 17 00:45:57.946: INFO: Pod "downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339933ms
Sep 17 00:45:59.955: INFO: Pod "downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014860307s
STEP: Saw pod success
Sep 17 00:45:59.955: INFO: Pod "downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a" satisfied condition "Succeeded or Failed"
Sep 17 00:45:59.959: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a container client-container: <nil>
STEP: delete the pod
Sep 17 00:45:59.983: INFO: Waiting for pod downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a to disappear
Sep 17 00:45:59.988: INFO: Pod downwardapi-volume-5240fb1b-49d8-4904-9b1e-dcb54b807f1a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:45:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1980" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":188,"skipped":3878,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 12 lines ...
Sep 17 00:46:00.059: INFO: The status of Pod pod-logs-websocket-a4c902a9-e588-4fab-826c-968508d7f856 is Pending, waiting for it to be Running (with Ready = true)
Sep 17 00:46:02.063: INFO: The status of Pod pod-logs-websocket-a4c902a9-e588-4fab-826c-968508d7f856 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:46:02.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6947" for this suite.
•{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":346,"completed":189,"skipped":3885,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller externalname-service in namespace services-7138
I0917 00:46:02.221067   97243 runners.go:193] Created replication controller with name: externalname-service, namespace: services-7138, replica count: 2
Sep 17 00:46:05.272: INFO: Creating new exec pod
I0917 00:46:05.272708   97243 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 00:46:08.359: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 00:46:09.634: INFO: rc: 1
Sep 17 00:46:09.634: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 00:46:10.634: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 00:46:11.808: INFO: rc: 1
Sep 17 00:46:11.809: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: connect to externalname-service port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 00:46:12.634: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Sep 17 00:46:12.934: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Sep 17 00:46:12.934: INFO: stdout: ""
Sep 17 00:46:13.634: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-7138 exec execpodlj7xd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
... skipping 13 lines ...
• [SLOW TEST:12.051 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":190,"skipped":3921,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:46:14.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6" in namespace "projected-3781" to be "Succeeded or Failed"
Sep 17 00:46:14.233: INFO: Pod "downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.850662ms
Sep 17 00:46:16.241: INFO: Pod "downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018977235s
STEP: Saw pod success
Sep 17 00:46:16.241: INFO: Pod "downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6" satisfied condition "Succeeded or Failed"
Sep 17 00:46:16.247: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6 container client-container: <nil>
STEP: delete the pod
Sep 17 00:46:16.292: INFO: Waiting for pod downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6 to disappear
Sep 17 00:46:16.304: INFO: Pod downwardapi-volume-6290581b-cc63-41c4-b656-4d5aaa9452f6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:46:16.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3781" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":191,"skipped":3930,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 21 lines ...
• [SLOW TEST:52.333 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":192,"skipped":3985,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:47:08.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564" in namespace "downward-api-8491" to be "Succeeded or Failed"
Sep 17 00:47:08.717: INFO: Pod "downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564": Phase="Pending", Reason="", readiness=false. Elapsed: 5.41757ms
Sep 17 00:47:10.722: INFO: Pod "downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010221934s
Sep 17 00:47:12.728: INFO: Pod "downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015981157s
STEP: Saw pod success
Sep 17 00:47:12.728: INFO: Pod "downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564" satisfied condition "Succeeded or Failed"
Sep 17 00:47:12.732: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564 container client-container: <nil>
STEP: delete the pod
Sep 17 00:47:12.757: INFO: Waiting for pod downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564 to disappear
Sep 17 00:47:12.761: INFO: Pod downwardapi-volume-ab9c9884-f38d-4006-a9fc-1dfd81b01564 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:12.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8491" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":193,"skipped":4017,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-instrumentation] Events 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
... skipping 11 lines ...
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-instrumentation] Events
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:12.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-871" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":194,"skipped":4020,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-auth] Certificates API [Privileged:ClusterAdmin] 
  should support CSR API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:13.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-9317" for this suite.
•{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":346,"completed":195,"skipped":4022,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 20 lines ...
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-839 to expose endpoints map[pod1:[100] pod2:[101]]
Sep 17 00:47:18.329: INFO: successfully validated that service multi-endpoint-test in namespace services-839 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Sep 17 00:47:18.329: INFO: Creating new exec pod
Sep 17 00:47:21.367: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-839 exec execpodjmm2t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 17 00:47:22.589: INFO: rc: 1
Sep 17 00:47:22.589: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-839 exec execpodjmm2t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 multi-endpoint-test 80
nc: connect to multi-endpoint-test port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 00:47:23.590: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-839 exec execpodjmm2t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Sep 17 00:47:24.773: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Sep 17 00:47:24.773: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 00:47:24.773: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-839 exec execpodjmm2t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.204.138 80'
... skipping 21 lines ...
• [SLOW TEST:13.719 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":346,"completed":196,"skipped":4028,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 18 lines ...
• [SLOW TEST:6.713 seconds]
[sig-storage] Downward API volume
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update labels on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":197,"skipped":4033,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:47:34.529: INFO: Waiting up to 5m0s for pod "downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71" in namespace "downward-api-5768" to be "Succeeded or Failed"
Sep 17 00:47:34.549: INFO: Pod "downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71": Phase="Pending", Reason="", readiness=false. Elapsed: 20.184948ms
Sep 17 00:47:36.556: INFO: Pod "downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026817769s
Sep 17 00:47:38.561: INFO: Pod "downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032225123s
STEP: Saw pod success
Sep 17 00:47:38.561: INFO: Pod "downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71" satisfied condition "Succeeded or Failed"
Sep 17 00:47:38.567: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71 container client-container: <nil>
STEP: delete the pod
Sep 17 00:47:38.593: INFO: Waiting for pod downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71 to disappear
Sep 17 00:47:38.609: INFO: Pod downwardapi-volume-178cefbf-90a8-4558-b774-eae05c7c0d71 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:38.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5768" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":198,"skipped":4060,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-7acc8c9d-8abd-423a-8f76-f350bdd32c67
STEP: Creating a pod to test consume configMaps
Sep 17 00:47:38.716: INFO: Waiting up to 5m0s for pod "pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b" in namespace "configmap-2213" to be "Succeeded or Failed"
Sep 17 00:47:38.723: INFO: Pod "pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.831614ms
Sep 17 00:47:40.728: INFO: Pod "pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011789188s
STEP: Saw pod success
Sep 17 00:47:40.728: INFO: Pod "pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b" satisfied condition "Succeeded or Failed"
Sep 17 00:47:40.733: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:47:40.756: INFO: Waiting for pod pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b to disappear
Sep 17 00:47:40.761: INFO: Pod pod-configmaps-391409d2-ac06-4b68-827a-ba55d7b3af9b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:40.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2213" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":199,"skipped":4084,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-48a52f02-1355-402b-b52e-69ae067fd154
STEP: Creating a pod to test consume configMaps
Sep 17 00:47:40.871: INFO: Waiting up to 5m0s for pod "pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b" in namespace "configmap-749" to be "Succeeded or Failed"
Sep 17 00:47:40.890: INFO: Pod "pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.397285ms
Sep 17 00:47:42.895: INFO: Pod "pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023779445s
STEP: Saw pod success
Sep 17 00:47:42.895: INFO: Pod "pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b" satisfied condition "Succeeded or Failed"
Sep 17 00:47:42.899: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b container configmap-volume-test: <nil>
STEP: delete the pod
Sep 17 00:47:42.925: INFO: Waiting for pod pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b to disappear
Sep 17 00:47:42.929: INFO: Pod pod-configmaps-e3078dee-954b-40d0-a62f-8bc8ca7e454b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:42.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-749" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":346,"completed":200,"skipped":4101,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 47 lines ...
• [SLOW TEST:10.750 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":201,"skipped":4137,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
Sep 17 00:47:53.821: INFO: The status of Pod busybox-readonly-fs9ede7b6f-e55c-47ef-a9e5-62a429a9a6ff is Pending, waiting for it to be Running (with Ready = true)
Sep 17 00:47:55.826: INFO: The status of Pod busybox-readonly-fs9ede7b6f-e55c-47ef-a9e5-62a429a9a6ff is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:47:55.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3112" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":202,"skipped":4163,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 21 lines ...
• [SLOW TEST:5.293 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should list and delete a collection of ReplicaSets [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":346,"completed":203,"skipped":4186,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods Extended Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods Extended
... skipping 10 lines ...
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:48:01.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3149" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":204,"skipped":4237,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:48:01.362: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 17 00:48:01.450: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:48:04.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3205" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":346,"completed":205,"skipped":4237,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-apps] ReplicaSet 
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 24 lines ...
• [SLOW TEST:8.231 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replace and Patch tests [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":346,"completed":206,"skipped":4241,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 00:48:12.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652" in namespace "downward-api-1126" to be "Succeeded or Failed"
Sep 17 00:48:12.435: INFO: Pod "downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652": Phase="Pending", Reason="", readiness=false. Elapsed: 18.069954ms
Sep 17 00:48:14.442: INFO: Pod "downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.025324656s
STEP: Saw pod success
Sep 17 00:48:14.442: INFO: Pod "downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652" satisfied condition "Succeeded or Failed"
Sep 17 00:48:14.448: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652 container client-container: <nil>
STEP: delete the pod
Sep 17 00:48:14.494: INFO: Waiting for pod downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652 to disappear
Sep 17 00:48:14.499: INFO: Pod downwardapi-volume-cb27dc36-acea-4db2-8f26-87a184b0b652 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:48:14.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1126" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":207,"skipped":4243,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-8ec832e1-6c72-4c92-8b9d-bf56cb50e48a
STEP: Creating a pod to test consume configMaps
Sep 17 00:48:14.587: INFO: Waiting up to 5m0s for pod "pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76" in namespace "configmap-2870" to be "Succeeded or Failed"
Sep 17 00:48:14.596: INFO: Pod "pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76": Phase="Pending", Reason="", readiness=false. Elapsed: 9.63529ms
Sep 17 00:48:16.602: INFO: Pod "pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015483423s
STEP: Saw pod success
Sep 17 00:48:16.602: INFO: Pod "pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76" satisfied condition "Succeeded or Failed"
Sep 17 00:48:16.607: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 00:48:16.635: INFO: Waiting for pod pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76 to disappear
Sep 17 00:48:16.640: INFO: Pod pod-configmaps-e522bcb2-48b5-407b-9620-4d1a777e6b76 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:48:16.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2870" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":208,"skipped":4252,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.889 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":346,"completed":209,"skipped":4340,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-apps] CronJob 
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 17 lines ...
• [SLOW TEST:300.179 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":210,"skipped":4342,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 10 lines ...
Sep 17 00:53:27.870: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 00:53:27.969: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:27.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4881" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":211,"skipped":4360,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 5 lines ...
[It] should be immutable if `immutable` field is set [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:28.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9986" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":346,"completed":212,"skipped":4379,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-cli] Kubectl client Kubectl diff 
  should check if kubectl diff finds a difference for Deployments [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
Sep 17 00:53:28.720: INFO: stderr: ""
Sep 17 00:53:28.720: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:28.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4434" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":213,"skipped":4380,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 9 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:28.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2317" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":346,"completed":214,"skipped":4382,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 34 lines ...
• [SLOW TEST:7.230 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":346,"completed":215,"skipped":4389,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 17 00:53:38.292: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:38.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3025" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":216,"skipped":4391,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 00:53:38.328: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Sep 17 00:53:38.421: INFO: Waiting up to 5m0s for pod "var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3" in namespace "var-expansion-7580" to be "Succeeded or Failed"
Sep 17 00:53:38.434: INFO: Pod "var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.335377ms
Sep 17 00:53:40.444: INFO: Pod "var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.023254142s
STEP: Saw pod success
Sep 17 00:53:40.444: INFO: Pod "var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3" satisfied condition "Succeeded or Failed"
Sep 17 00:53:40.450: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3 container dapi-container: <nil>
STEP: delete the pod
Sep 17 00:53:40.538: INFO: Waiting for pod var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3 to disappear
Sep 17 00:53:40.543: INFO: Pod var-expansion-0adae432-7526-4755-b370-8c40dbc1a8a3 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:53:40.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7580" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":346,"completed":217,"skipped":4431,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 00:53:40.557: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Sep 17 00:53:40.621: INFO: PodSpec: initContainers in spec.initContainers
Sep 17 00:54:25.912: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-0efd0366-0246-4a18-96e9-668f9a9501c8", GenerateName:"", Namespace:"init-container-5215", SelfLink:"", UID:"cf7cdbe2-8f60-4c66-878e-e9ddc64e2541", ResourceVersion:"17227", Generation:0, CreationTimestamp:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"621254454"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ce0018), Subresource:""}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:time.Date(2021, time.September, 17, 0, 53, 42, 0, time.Local), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ce0048), Subresource:"status"}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-5lpvl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), 
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc0033aa000), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5lpvl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5lpvl", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-5lpvl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00557a278), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", 
AutomountServiceAccountToken:(*bool)(nil), NodeName:"kt2-280c76ac-1743-minion-group-8xgx", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003f86000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00557a2f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00557a320)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00557a328), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00557a32c), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc003392020), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), Reason:"ContainersNotReady", 
Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.128.0.4", PodIP:"10.64.0.88", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.64.0.88"}}, StartTime:time.Date(2021, time.September, 17, 0, 53, 40, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003f860e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc003f86150)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:c318242786b139d18676b1c09a0ad7f15fc17f8f16a5b2e625cd0dc8c9703daf", ContainerID:"containerd://d1c45a24fab4abcaddc5f706ab913621f4359eed275e885a4926ef00305b2644", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033aa140), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-2", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0033aa0e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.6", ImageID:"", ContainerID:"", Started:(*bool)(0xc00557a3af)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:54:25.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5215" for this suite.

• [SLOW TEST:45.374 seconds]
[sig-node] InitContainer [NodeConformance]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":346,"completed":218,"skipped":4449,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:8.298 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":219,"skipped":4477,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 26 lines ...
• [SLOW TEST:88.087 seconds]
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":220,"skipped":4493,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
• [SLOW TEST:146.727 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":221,"skipped":4516,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 16 lines ...
Sep 17 00:58:31.272: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Sep 17 00:58:31.406: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:58:31.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8937" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":222,"skipped":4552,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-node] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 5 lines ...
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:58:33.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4578" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":346,"completed":223,"skipped":4565,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 00:58:33.699: INFO: Waiting up to 5m0s for pod "busybox-user-65534-9eaa22e1-252a-4df1-bccb-e25f2c3d7a6c" in namespace "security-context-test-6653" to be "Succeeded or Failed"
Sep 17 00:58:33.705: INFO: Pod "busybox-user-65534-9eaa22e1-252a-4df1-bccb-e25f2c3d7a6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.202514ms
Sep 17 00:58:35.714: INFO: Pod "busybox-user-65534-9eaa22e1-252a-4df1-bccb-e25f2c3d7a6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014832095s
Sep 17 00:58:35.714: INFO: Pod "busybox-user-65534-9eaa22e1-252a-4df1-bccb-e25f2c3d7a6c" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:58:35.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6653" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":224,"skipped":4575,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 62 lines ...
• [SLOW TEST:5.347 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should verify changes to a daemon set status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should verify changes to a daemon set status [Conformance]","total":346,"completed":225,"skipped":4630,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
• [SLOW TEST:7.794 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":346,"completed":226,"skipped":4643,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 29 lines ...
• [SLOW TEST:8.313 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":227,"skipped":4653,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice 
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
... skipping 19 lines ...
• [SLOW TEST:30.476 seconds]
[sig-network] EndpointSlice
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":346,"completed":228,"skipped":4677,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 51 lines ...
• [SLOW TEST:8.514 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":346,"completed":229,"skipped":4679,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 00:59:36.174: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 17 00:59:36.289: INFO: Waiting up to 5m0s for pod "pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740" in namespace "emptydir-3617" to be "Succeeded or Failed"
Sep 17 00:59:36.300: INFO: Pod "pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740": Phase="Pending", Reason="", readiness=false. Elapsed: 10.591388ms
Sep 17 00:59:38.306: INFO: Pod "pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016223844s
STEP: Saw pod success
Sep 17 00:59:38.306: INFO: Pod "pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740" satisfied condition "Succeeded or Failed"
Sep 17 00:59:38.309: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740 container test-container: <nil>
STEP: delete the pod
Sep 17 00:59:38.337: INFO: Waiting for pod pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740 to disappear
Sep 17 00:59:38.343: INFO: Pod pod-ae9c4b84-3e3b-4e77-bb47-f4eaac645740 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 00:59:38.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3617" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":230,"skipped":4683,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 33 lines ...
• [SLOW TEST:8.200 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":231,"skipped":4706,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":232,"skipped":4706,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 21 lines ...
• [SLOW TEST:13.243 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":346,"completed":233,"skipped":4706,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 15 lines ...
• [SLOW TEST:60.097 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":346,"completed":234,"skipped":4716,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:02:11.495: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:02:17.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9211" for this suite.

• [SLOW TEST:6.105 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":346,"completed":235,"skipped":4716,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 16 lines ...
STEP: verifying the updated pod is in kubernetes
Sep 17 01:02:20.212: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:02:20.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8699" for this suite.
•{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":236,"skipped":4737,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-l2z5
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 01:02:20.426: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-l2z5" in namespace "subpath-3956" to be "Succeeded or Failed"
Sep 17 01:02:20.431: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.121743ms
Sep 17 01:02:22.436: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 2.010804531s
Sep 17 01:02:24.445: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 4.019122018s
Sep 17 01:02:26.451: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 6.025084568s
Sep 17 01:02:28.455: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 8.029412217s
Sep 17 01:02:30.462: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 10.036061135s
Sep 17 01:02:32.469: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 12.043538036s
Sep 17 01:02:34.474: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 14.0483557s
Sep 17 01:02:36.480: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 16.054630496s
Sep 17 01:02:38.486: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 18.060582349s
Sep 17 01:02:40.498: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Running", Reason="", readiness=true. Elapsed: 20.072423134s
Sep 17 01:02:42.504: INFO: Pod "pod-subpath-test-configmap-l2z5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.078722412s
STEP: Saw pod success
Sep 17 01:02:42.504: INFO: Pod "pod-subpath-test-configmap-l2z5" satisfied condition "Succeeded or Failed"
Sep 17 01:02:42.512: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-subpath-test-configmap-l2z5 container test-container-subpath-configmap-l2z5: <nil>
STEP: delete the pod
Sep 17 01:02:42.548: INFO: Waiting for pod pod-subpath-test-configmap-l2z5 to disappear
Sep 17 01:02:42.554: INFO: Pod pod-subpath-test-configmap-l2z5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-l2z5
Sep 17 01:02:42.554: INFO: Deleting pod "pod-subpath-test-configmap-l2z5" in namespace "subpath-3956"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":237,"skipped":4747,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] NoExecuteTaintManager Single Pod [Serial] 
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 27 lines ...
• [SLOW TEST:135.388 seconds]
[sig-node] NoExecuteTaintManager Single Pod [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  removing taint cancels eviction [Disruptive] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":346,"completed":238,"skipped":4777,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 2 lines ...
Sep 17 01:04:57.954: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Sep 17 01:04:58.041: INFO: Waiting up to 5m0s for pod "var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943" in namespace "var-expansion-4328" to be "Succeeded or Failed"
Sep 17 01:04:58.047: INFO: Pod "var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943": Phase="Pending", Reason="", readiness=false. Elapsed: 5.982768ms
Sep 17 01:05:00.053: INFO: Pod "var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011620585s
STEP: Saw pod success
Sep 17 01:05:00.053: INFO: Pod "var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943" satisfied condition "Succeeded or Failed"
Sep 17 01:05:00.056: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943 container dapi-container: <nil>
STEP: delete the pod
Sep 17 01:05:00.101: INFO: Waiting for pod var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943 to disappear
Sep 17 01:05:00.105: INFO: Pod var-expansion-f3dc8b51-0c4f-471e-a061-7885d3d40943 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:05:00.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4328" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":239,"skipped":4793,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] 
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 27 lines ...
• [SLOW TEST:69.015 seconds]
[sig-scheduling] SchedulerPreemption [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates lower priority pod preemption by critical pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":346,"completed":240,"skipped":4808,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 11 lines ...
Sep 17 01:06:09.269: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2264  c5c39eae-83ab-4b49-85dc-671948bff1f9 19616 0 2021-09-17 01:06:09 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-17 01:06:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Sep 17 01:06:09.270: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-2264  c5c39eae-83ab-4b49-85dc-671948bff1f9 19617 0 2021-09-17 01:06:09 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-09-17 01:06:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:06:09.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2264" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":346,"completed":241,"skipped":4808,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 10 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:06:09.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-3581" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":242,"skipped":4813,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  Deployment should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 24 lines ...
Sep 17 01:06:11.752: INFO: Pod "test-new-deployment-5c557bc5bf-mq7b8" is not available:
&Pod{ObjectMeta:{test-new-deployment-5c557bc5bf-mq7b8 test-new-deployment-5c557bc5bf- deployment-8168  760d1c2d-3586-4d65-aeb3-159e4788216b 19657 0 2021-09-17 01:06:11 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5c557bc5bf] map[] [{apps/v1 ReplicaSet test-new-deployment-5c557bc5bf 8bdb73c0-4baa-48aa-bda0-4de72e71158d 0xc006e18400 0xc006e18401}] []  [{kube-controller-manager Update v1 2021-09-17 01:06:11 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8bdb73c0-4baa-48aa-bda0-4de72e71158d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:06:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rg4nx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rg4nx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-8xgx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:06:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2021-09-17 01:06:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:06:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 01:06:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:06:11.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8168" for this suite.
•{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":243,"skipped":4853,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
... skipping 26 lines ...
Sep 17 01:06:24.365: INFO: Unable to read jessie_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.376: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.387: INFO: Unable to read jessie_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.401: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.422: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:24.465: INFO: Lookups using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9763 wheezy_tcp@dns-test-service.dns-9763 wheezy_udp@dns-test-service.dns-9763.svc wheezy_tcp@dns-test-service.dns-9763.svc wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9763.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9763 jessie_tcp@dns-test-service.dns-9763 jessie_udp@dns-test-service.dns-9763.svc jessie_tcp@dns-test-service.dns-9763.svc jessie_udp@_http._tcp.dns-test-service.dns-9763.svc jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc]

Sep 17 01:06:29.477: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.485: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.493: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.501: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.510: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
... skipping 5 lines ...
Sep 17 01:06:29.601: INFO: Unable to read jessie_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.637: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.669: INFO: Unable to read jessie_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.687: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.713: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.742: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:29.844: INFO: Lookups using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9763 wheezy_tcp@dns-test-service.dns-9763 wheezy_udp@dns-test-service.dns-9763.svc wheezy_tcp@dns-test-service.dns-9763.svc wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9763.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9763 jessie_tcp@dns-test-service.dns-9763 jessie_udp@dns-test-service.dns-9763.svc jessie_tcp@dns-test-service.dns-9763.svc jessie_udp@_http._tcp.dns-test-service.dns-9763.svc jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc]

Sep 17 01:06:34.476: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.484: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.499: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.509: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
... skipping 5 lines ...
Sep 17 01:06:34.585: INFO: Unable to read jessie_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.598: INFO: Unable to read jessie_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.610: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.746: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:34.773: INFO: Lookups using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9763 wheezy_tcp@dns-test-service.dns-9763 wheezy_udp@dns-test-service.dns-9763.svc wheezy_tcp@dns-test-service.dns-9763.svc wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9763.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9763 jessie_tcp@dns-test-service.dns-9763 jessie_udp@dns-test-service.dns-9763.svc jessie_tcp@dns-test-service.dns-9763.svc jessie_udp@_http._tcp.dns-test-service.dns-9763.svc jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc]

Sep 17 01:06:39.476: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.494: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.504: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.512: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.519: INFO: Unable to read wheezy_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
... skipping 5 lines ...
Sep 17 01:06:39.609: INFO: Unable to read jessie_udp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.616: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763 from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.625: INFO: Unable to read jessie_udp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.633: INFO: Unable to read jessie_tcp@dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.655: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:39.687: INFO: Lookups using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9763 wheezy_tcp@dns-test-service.dns-9763 wheezy_udp@dns-test-service.dns-9763.svc wheezy_tcp@dns-test-service.dns-9763.svc wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9763.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9763 jessie_tcp@dns-test-service.dns-9763 jessie_udp@dns-test-service.dns-9763.svc jessie_tcp@dns-test-service.dns-9763.svc jessie_udp@_http._tcp.dns-test-service.dns-9763.svc jessie_tcp@_http._tcp.dns-test-service.dns-9763.svc]

Sep 17 01:06:44.557: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc from pod dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43: the server could not find the requested resource (get pods dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43)
Sep 17 01:06:44.706: INFO: Lookups using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-9763.svc]

Sep 17 01:06:49.658: INFO: DNS probes using dns-9763/dns-test-47b09cca-003a-4cdf-af5b-81e9e6842c43 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 5 lines ...
• [SLOW TEST:38.014 seconds]
[sig-network] DNS
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":346,"completed":244,"skipped":4865,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 12 lines ...
I0917 01:06:49.938118   97243 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-3362, replica count: 3
I0917 01:06:52.989357   97243 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 2 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0917 01:06:55.990480   97243 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 01:06:56.003: INFO: Creating new exec pod
Sep 17 01:06:59.029: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-3362 exec execpod-affinitysq28t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Sep 17 01:07:01.405: INFO: rc: 1
Sep 17 01:07:01.405: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-3362 exec execpod-affinitysq28t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80:
Command stdout:

stderr:
+ nc -v -t -w 2 affinity-nodeport-transition 80
+ echo hostName
nc: connect to affinity-nodeport-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
I0917 01:07:01.873204    2918 boskos.go:86] Sending heartbeat to Boskos
Sep 17 01:07:02.405: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-3362 exec execpod-affinitysq28t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Sep 17 01:07:02.558: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Sep 17 01:07:02.558: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
... skipping 77 lines ...
• [SLOW TEST:47.078 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":245,"skipped":4875,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 19 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:07:40.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1700" for this suite.
STEP: Destroying namespace "webhook-1700-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":346,"completed":246,"skipped":4875,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-apps] DisruptionController 
  should update/patch PodDisruptionBudget status [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 15 lines ...
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:07:45.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-1450" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":247,"skipped":4875,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 31 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
    when starting a container that exits
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
      should run with the expected status [NodeConformance] [Conformance]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":248,"skipped":4880,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:08:08.700: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Sep 17 01:08:08.754: INFO: Waiting up to 5m0s for pod "pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd" in namespace "emptydir-2513" to be "Succeeded or Failed"
Sep 17 01:08:08.767: INFO: Pod "pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.133566ms
Sep 17 01:08:10.771: INFO: Pod "pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.016927008s
STEP: Saw pod success
Sep 17 01:08:10.771: INFO: Pod "pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd" satisfied condition "Succeeded or Failed"
Sep 17 01:08:10.775: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd container test-container: <nil>
STEP: delete the pod
Sep 17 01:08:10.819: INFO: Waiting for pod pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd to disappear
Sep 17 01:08:10.822: INFO: Pod pod-5fc35778-c0ff-4ec2-898e-b57ad43ec4cd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:08:10.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2513" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":249,"skipped":4954,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 20 lines ...
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:08:10.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8675" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":346,"completed":250,"skipped":4989,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Aggregator
... skipping 28 lines ...
• [SLOW TEST:13.394 seconds]
[sig-api-machinery] Aggregator
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":346,"completed":251,"skipped":5039,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:08:24.408: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Sep 17 01:08:24.480: INFO: Waiting up to 5m0s for pod "pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead" in namespace "emptydir-2579" to be "Succeeded or Failed"
Sep 17 01:08:24.487: INFO: Pod "pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead": Phase="Pending", Reason="", readiness=false. Elapsed: 7.264218ms
Sep 17 01:08:26.500: INFO: Pod "pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020088987s
STEP: Saw pod success
Sep 17 01:08:26.500: INFO: Pod "pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead" satisfied condition "Succeeded or Failed"
Sep 17 01:08:26.506: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead container test-container: <nil>
STEP: delete the pod
Sep 17 01:08:26.531: INFO: Waiting for pod pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead to disappear
Sep 17 01:08:26.535: INFO: Pod pod-78745a78-ce0d-4ec6-a8d1-dc8c0aea3ead no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:08:26.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2579" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":252,"skipped":5043,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 46 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":253,"skipped":5079,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-node] Security Context 
  should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:08:54.022: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Sep 17 01:08:54.081: INFO: Waiting up to 5m0s for pod "security-context-a4021523-25c1-4bbe-9e78-afbea2597a02" in namespace "security-context-9144" to be "Succeeded or Failed"
Sep 17 01:08:54.086: INFO: Pod "security-context-a4021523-25c1-4bbe-9e78-afbea2597a02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719315ms
Sep 17 01:08:56.091: INFO: Pod "security-context-a4021523-25c1-4bbe-9e78-afbea2597a02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009809204s
STEP: Saw pod success
Sep 17 01:08:56.091: INFO: Pod "security-context-a4021523-25c1-4bbe-9e78-afbea2597a02" satisfied condition "Succeeded or Failed"
Sep 17 01:08:56.097: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod security-context-a4021523-25c1-4bbe-9e78-afbea2597a02 container test-container: <nil>
STEP: delete the pod
Sep 17 01:08:56.152: INFO: Waiting for pod security-context-a4021523-25c1-4bbe-9e78-afbea2597a02 to disappear
Sep 17 01:08:56.159: INFO: Pod security-context-a4021523-25c1-4bbe-9e78-afbea2597a02 no longer exists
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:08:56.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-9144" for this suite.
•{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":346,"completed":254,"skipped":5079,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 16 lines ...
[It] should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
Sep 17 01:09:02.221: INFO: Waiting for webhook configuration to be ready...
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:09:14.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5904" for this suite.
STEP: Destroying namespace "webhook-5904-markers" for this suite.
... skipping 3 lines ...
• [SLOW TEST:18.494 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":346,"completed":255,"skipped":5086,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 4 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Sep 17 01:09:14.763: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e" in namespace "projected-7760" to be "Succeeded or Failed"
Sep 17 01:09:14.770: INFO: Pod "downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.481177ms
Sep 17 01:09:16.774: INFO: Pod "downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010687632s
STEP: Saw pod success
Sep 17 01:09:16.774: INFO: Pod "downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e" satisfied condition "Succeeded or Failed"
Sep 17 01:09:16.783: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e container client-container: <nil>
STEP: delete the pod
Sep 17 01:09:16.816: INFO: Waiting for pod downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e to disappear
Sep 17 01:09:16.819: INFO: Pod downwardapi-volume-c94eb185-9375-4c3f-a5ab-a50580576f0e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:09:16.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7760" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":256,"skipped":5103,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:09:16.827: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename webhook
... skipping 5 lines ...
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Sep 17 01:09:17.503: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Sep 17 01:09:20.574: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:09:20.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7685" for this suite.
STEP: Destroying namespace "webhook-7685-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":346,"completed":257,"skipped":5130,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 30 lines ...
• [SLOW TEST:9.586 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":346,"completed":258,"skipped":5134,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 32 lines ...
• [SLOW TEST:11.026 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":346,"completed":259,"skipped":5142,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:09:41.465: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Sep 17 01:09:41.613: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:41.617: INFO: Number of nodes with available pods: 0
Sep 17 01:09:41.617: INFO: Node kt2-280c76ac-1743-minion-group-8xgx is running more than one daemon pod
... skipping 3 lines ...
Sep 17 01:09:43.623: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:43.628: INFO: Number of nodes with available pods: 2
Sep 17 01:09:43.628: INFO: Node kt2-280c76ac-1743-minion-group-rr86 is running more than one daemon pod
Sep 17 01:09:44.623: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:44.628: INFO: Number of nodes with available pods: 3
Sep 17 01:09:44.628: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Sep 17 01:09:44.663: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:44.678: INFO: Number of nodes with available pods: 2
Sep 17 01:09:44.678: INFO: Node kt2-280c76ac-1743-minion-group-xp78 is running more than one daemon pod
Sep 17 01:09:45.691: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:45.696: INFO: Number of nodes with available pods: 2
Sep 17 01:09:45.696: INFO: Node kt2-280c76ac-1743-minion-group-xp78 is running more than one daemon pod
Sep 17 01:09:46.690: INFO: DaemonSet pods can't tolerate node kt2-280c76ac-1743-master with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>} {Key:node.kubernetes.io/unschedulable Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Sep 17 01:09:46.702: INFO: Number of nodes with available pods: 3
Sep 17 01:09:46.702: INFO: Number of running nodes: 3, number of available pods: 3
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8706, will wait for the garbage collector to delete the pods
Sep 17 01:09:46.798: INFO: Deleting DaemonSet.extensions daemon-set took: 22.332547ms
Sep 17 01:09:46.899: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.807843ms
... skipping 8 lines ...
Sep 17 01:09:49.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8706" for this suite.

• [SLOW TEST:8.420 seconds]
[sig-apps] Daemon set [Serial]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":260,"skipped":5182,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-fc978bc8-55e5-474c-b90f-da6b1f1903ed
STEP: Creating a pod to test consume secrets
Sep 17 01:09:49.987: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b" in namespace "projected-8655" to be "Succeeded or Failed"
Sep 17 01:09:49.993: INFO: Pod "pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.869278ms
Sep 17 01:09:52.000: INFO: Pod "pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012771065s
STEP: Saw pod success
Sep 17 01:09:52.000: INFO: Pod "pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b" satisfied condition "Succeeded or Failed"
Sep 17 01:09:52.003: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 01:09:52.027: INFO: Waiting for pod pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b to disappear
Sep 17 01:09:52.032: INFO: Pod pod-projected-secrets-60f801c4-0739-4936-95a2-4b9a60e8ea7b no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:09:52.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8655" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":261,"skipped":5185,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 51 lines ...
• [SLOW TEST:21.390 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":262,"skipped":5217,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Service endpoints latency
... skipping 424 lines ...
• [SLOW TEST:10.898 seconds]
[sig-network] Service endpoints latency
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should not be very high  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":346,"completed":263,"skipped":5232,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should validate Statefulset Status endpoints [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should validate Statefulset Status endpoints [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":264,"skipped":5271,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] IngressClass API 
   should support creating IngressClass API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] IngressClass API
... skipping 21 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:10:45.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6102" for this suite.
•{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":346,"completed":265,"skipped":5281,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 11 lines ...
STEP: creating replication controller affinity-clusterip in namespace services-47
I0917 01:10:45.136087   97243 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-47, replica count: 3
I0917 01:10:48.187217   97243 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Sep 17 01:10:48.196: INFO: Creating new exec pod
Sep 17 01:10:51.245: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
Sep 17 01:10:52.509: INFO: rc: 1
Sep 17 01:10:52.509: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 01:10:53.509: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
Sep 17 01:10:54.751: INFO: rc: 1
Sep 17 01:10:54.751: INFO: Service reachability failing with error: error running /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip 80
nc: connect to affinity-clusterip port 80 (tcp) failed: Connection refused
command terminated with exit code 1

error:
exit status 1
Retrying...
Sep 17 01:10:55.510: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
Sep 17 01:10:56.798: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Sep 17 01:10:56.798: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Sep 17 01:10:56.798: INFO: Running '/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubectl --server=https://35.222.74.146 --kubeconfig=/logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig --namespace=services-47 exec execpod-affinity75x2z -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.0.47.111 80'
... skipping 32 lines ...
• [SLOW TEST:14.449 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":266,"skipped":5293,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
  runs ReplicaSets to verify preemption running path [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 42 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  PreemptionExecutionPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
    runs ReplicaSets to verify preemption running path [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":267,"skipped":5304,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-projected-7khw
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 01:12:25.041: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7khw" in namespace "subpath-5994" to be "Succeeded or Failed"
Sep 17 01:12:25.044: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.877885ms
Sep 17 01:12:27.049: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 2.008568517s
Sep 17 01:12:29.054: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 4.013207296s
Sep 17 01:12:31.061: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 6.020899779s
Sep 17 01:12:33.066: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 8.025185548s
Sep 17 01:12:35.070: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 10.029333596s
Sep 17 01:12:37.074: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 12.033738901s
Sep 17 01:12:39.079: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 14.038333938s
Sep 17 01:12:41.084: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 16.043669089s
Sep 17 01:12:43.089: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 18.048850003s
Sep 17 01:12:45.115: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Running", Reason="", readiness=true. Elapsed: 20.07405117s
Sep 17 01:12:47.123: INFO: Pod "pod-subpath-test-projected-7khw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.0825531s
STEP: Saw pod success
Sep 17 01:12:47.123: INFO: Pod "pod-subpath-test-projected-7khw" satisfied condition "Succeeded or Failed"
Sep 17 01:12:47.129: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-subpath-test-projected-7khw container test-container-subpath-projected-7khw: <nil>
STEP: delete the pod
Sep 17 01:12:47.184: INFO: Waiting for pod pod-subpath-test-projected-7khw to disappear
Sep 17 01:12:47.188: INFO: Pod pod-subpath-test-projected-7khw no longer exists
STEP: Deleting pod pod-subpath-test-projected-7khw
Sep 17 01:12:47.188: INFO: Deleting pod "pod-subpath-test-projected-7khw" in namespace "subpath-5994"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":268,"skipped":5315,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
... skipping 18 lines ...
• [SLOW TEST:6.690 seconds]
[sig-storage] Projected downwardAPI
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should update annotations on modification [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":269,"skipped":5346,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Lease 
  lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Lease
... skipping 5 lines ...
[It] lease API should be available [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:12:53.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6479" for this suite.
•{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":346,"completed":270,"skipped":5367,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-node] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
... skipping 2 lines ...
Sep 17 01:12:53.999: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Sep 17 01:12:54.053: INFO: Waiting up to 5m0s for pod "client-containers-44894404-0635-42f0-871f-72437be8a3f6" in namespace "containers-8259" to be "Succeeded or Failed"
Sep 17 01:12:54.062: INFO: Pod "client-containers-44894404-0635-42f0-871f-72437be8a3f6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870739ms
Sep 17 01:12:56.067: INFO: Pod "client-containers-44894404-0635-42f0-871f-72437be8a3f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014221527s
STEP: Saw pod success
Sep 17 01:12:56.067: INFO: Pod "client-containers-44894404-0635-42f0-871f-72437be8a3f6" satisfied condition "Succeeded or Failed"
Sep 17 01:12:56.071: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod client-containers-44894404-0635-42f0-871f-72437be8a3f6 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 01:12:56.092: INFO: Waiting for pod client-containers-44894404-0635-42f0-871f-72437be8a3f6 to disappear
Sep 17 01:12:56.097: INFO: Pod client-containers-44894404-0635-42f0-871f-72437be8a3f6 no longer exists
[AfterEach] [sig-node] Docker Containers
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:12:56.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8259" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":346,"completed":271,"skipped":5371,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
... skipping 26 lines ...
• [SLOW TEST:36.963 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should succeed in writing subpaths in container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":346,"completed":272,"skipped":5418,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
... skipping 49 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":346,"completed":273,"skipped":5419,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance] 
  should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
... skipping 6 lines ...
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:13:57.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-4996" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":274,"skipped":5421,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-f77491d7-c846-45ea-a229-323e533f22c5
STEP: Creating a pod to test consume secrets
Sep 17 01:13:58.055: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4" in namespace "projected-5054" to be "Succeeded or Failed"
Sep 17 01:13:58.061: INFO: Pod "pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.81868ms
Sep 17 01:14:00.066: INFO: Pod "pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011638236s
STEP: Saw pod success
Sep 17 01:14:00.066: INFO: Pod "pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4" satisfied condition "Succeeded or Failed"
Sep 17 01:14:00.070: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4 container projected-secret-volume-test: <nil>
STEP: delete the pod
Sep 17 01:14:00.092: INFO: Waiting for pod pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4 to disappear
Sep 17 01:14:00.095: INFO: Pod pod-projected-secrets-ec6d4e75-f1f1-4eb8-9ac3-0af0580085e4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:14:00.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5054" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":275,"skipped":5422,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 20 lines ...
• [SLOW TEST:17.113 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":276,"skipped":5436,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 113 lines ...
• [SLOW TEST:7.796 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run the lifecycle of a Deployment [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":277,"skipped":5461,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 17 01:14:25.193: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:14:28.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7964" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":278,"skipped":5466,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 35 lines ...
• [SLOW TEST:65.865 seconds]
[sig-storage] EmptyDir wrapper volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":279,"skipped":5482,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:15:36.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4930" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":280,"skipped":5505,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Variable Expansion 
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:15:36.541: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
I0917 01:17:01.923410    2918 boskos.go:86] Sending heartbeat to Boskos
STEP: updating the pod
Sep 17 01:17:37.145: INFO: Successfully updated pod "var-expansion-c9dfd20b-a9bb-41d2-8830-71883cf9b48d"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Sep 17 01:17:39.158: INFO: Deleting pod "var-expansion-c9dfd20b-a9bb-41d2-8830-71883cf9b48d" in namespace "var-expansion-8420"
... skipping 6 lines ...
• [SLOW TEST:154.650 seconds]
[sig-node] Variable Expansion
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":281,"skipped":5551,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
... skipping 12 lines ...
Sep 17 01:18:13.339: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:18:13.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-542" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":346,"completed":282,"skipped":5565,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
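The termination-message test above writes `DONE` to a non-default `terminationMessagePath` as a non-root user and checks the kubelet surfaces it in the pod status. The selection logic can be sketched roughly as follows (a hedged simplification: the real kubelet caps the message size and, under `FallbackToLogsOnError`, tails the container log only when the file is empty and the container failed; treat the byte limit here as an assumption).

```python
def termination_message(path_contents: str, container_logs: str,
                        policy: str, exit_code: int,
                        limit: int = 2048) -> str:
    """Choose a container's termination message (simplified sketch).

    - If the file at terminationMessagePath has content, use its tail.
    - Else, with policy "FallbackToLogsOnError" and a non-zero exit
      code, fall back to the tail of the container logs.
    - Otherwise the message is empty.
    """
    if path_contents:
        return path_contents[-limit:]
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return container_logs[-limit:]
    return ""
```

In the test's case the file is non-empty, so the message is read from the custom path regardless of policy, matching the expected `DONE`.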
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 41 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1238
    should create services for rc  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":346,"completed":283,"skipped":5577,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-network] EndpointSlice 
  should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 7 lines ...
Sep 17 01:18:19.201: INFO: Endpoints addresses: [35.222.74.146] , ports: [443]
Sep 17 01:18:19.201: INFO: EndpointSlices addresses: [35.222.74.146] , ports: [443]
[AfterEach] [sig-network] EndpointSlice
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:18:19.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-8693" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":346,"completed":284,"skipped":5577,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
... skipping 92 lines ...
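The Deployment test running here validates proportional scaling: when a Deployment is resized mid-rollout, the new replica total is split across its ReplicaSets in proportion to their current sizes. A minimal sketch of that distribution (assumed simplification: the real controller also honors `maxSurge` and uses its own leftover-rounding order):

```python
def proportional_scale(rs_sizes: list[int], new_total: int) -> list[int]:
    """Split new_total across ReplicaSets proportionally to rs_sizes.

    Floor-divides each share, then hands leftover replicas to the
    largest ReplicaSets first. Hedged sketch of the behavior the
    conformance test checks, not the deployment controller itself.
    """
    current_total = sum(rs_sizes)
    if current_total == 0:
        # nothing to be proportional to: give everything to the first RS
        return [new_total] + [0] * (len(rs_sizes) - 1)
    scaled = [n * new_total // current_total for n in rs_sizes]
    leftover = new_total - sum(scaled)
    for i in sorted(range(len(rs_sizes)), key=lambda i: -rs_sizes[i]):
        if leftover == 0:
            break
        scaled[i] += 1
        leftover -= 1
    return scaled
```

For example, two ReplicaSets at 8 and 2 replicas scaled to a total of 5 end up at 4 and 1, preserving the 80/20 ratio.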
&Pod{ObjectMeta:{webserver-deployment-795d758f88-bzth4 webserver-deployment-795d758f88- deployment-9244  57f54bf4-06b0-4683-af8d-f16f540d7e0b 24750 0 2021-09-17 01:18:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc0061df9c0 0xc0061df9c1}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rkmn6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rkmn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-8xgx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 01:18:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.616: INFO: Pod "webserver-deployment-795d758f88-gkm7c" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-gkm7c webserver-deployment-795d758f88- deployment-9244  981a6563-d8c2-4148-82fb-d2680ba5ad88 24762 0 2021-09-17 01:18:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc0061dfb90 0xc0061dfb91}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fw2kv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw2kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-8xgx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 01:18:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.616: INFO: Pod "webserver-deployment-795d758f88-gzfp7" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-gzfp7 webserver-deployment-795d758f88- deployment-9244  1aa10ee2-f433-4aad-a98d-045b9674e4b2 24861 0 2021-09-17 01:18:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc0061dfd70 0xc0061dfd71}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x9g8x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x9g8x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-rr86,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.5,PodIP:,StartTime:2021-09-17 01:18:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.617: INFO: Pod "webserver-deployment-795d758f88-llxmk" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-llxmk webserver-deployment-795d758f88- deployment-9244  a97f071c-e418-447b-97aa-cfd906a29599 24855 0 2021-09-17 01:18:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc0061dff40 0xc0061dff41}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tqrdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tqrdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-xp78,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:10.64.3.15,StartTime:2021-09-17 01:18:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.617: INFO: Pod "webserver-deployment-795d758f88-qj6r7" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-qj6r7 webserver-deployment-795d758f88- deployment-9244  c84088e1-81aa-49bb-8a40-9b958d23a7a1 24845 0 2021-09-17 01:18:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc006764140 0xc006764141}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.64.3.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j8blf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8blf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-xp78,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.3,PodIP:10.64.3.16,StartTime:2021-09-17 01:18:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.64.3.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.617: INFO: Pod "webserver-deployment-795d758f88-ql6cj" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-ql6cj webserver-deployment-795d758f88- deployment-9244  438647e0-f6c9-4db2-b071-035ff836647f 24860 0 2021-09-17 01:18:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc006764380 0xc006764381}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:27 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-btjmq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-btjmq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-8xgx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 01:18:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Sep 17 01:18:27.617: INFO: Pod "webserver-deployment-795d758f88-x745g" is not available:
&Pod{ObjectMeta:{webserver-deployment-795d758f88-x745g webserver-deployment-795d758f88- deployment-9244  01ec3df8-9d14-467c-a9c5-d012f3f2312e 24844 0 2021-09-17 01:18:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 89fa8451-588a-4ab1-8cbb-d20fb24fa712 0xc006764550 0xc006764551}] []  [{kube-controller-manager Update v1 2021-09-17 01:18:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"89fa8451-588a-4ab1-8cbb-d20fb24fa712\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2021-09-17 01:18:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vjjr6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vjjr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kt2-280c76ac-1743-minion-group-8xgx,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 
01:18:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-09-17 01:18:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.128.0.4,PodIP:,StartTime:2021-09-17 01:18:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
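The pod dumps above end in two different `ContainerStateWaiting` reasons: `ErrImagePull` (the deliberately-unresolvable `webserver:404` image) and `ContainerCreating` (kubelet has not yet attempted the pull). A hypothetical helper for pulling those reasons out of a pod `status` dict — field names follow the Kubernetes Pod v1 API as seen in the dumps; the sample data below is illustrative, not taken from the cluster:

```python
# Hypothetical helper: report why containers in a pod status are not
# running, mirroring the Waiting reasons in the pod dumps above
# (ErrImagePull on one node, ContainerCreating on another).

def waiting_reasons(pod_status):
    """Return {container_name: waiting_reason} for containers stuck in Waiting."""
    reasons = {}
    for cs in pod_status.get("containerStatuses", []):
        waiting = cs.get("state", {}).get("waiting")
        if waiting is not None:
            reasons[cs["name"]] = waiting.get("reason", "Unknown")
    return reasons

# Illustrative status fragment shaped like the dumps above.
status = {
    "phase": "Pending",
    "containerStatuses": [
        {"name": "httpd", "state": {"waiting": {"reason": "ErrImagePull"}}},
    ],
}
print(waiting_reasons(status))  # {'httpd': 'ErrImagePull'}
```

The test expects these pods to be unavailable, so the `Waiting` states are the intended behavior, not a failure.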
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
... skipping 3 lines ...
• [SLOW TEST:8.414 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":285,"skipped":5577,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
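Each completed spec emits a one-line JSON progress record like the `{"msg":"PASSED ...","total":346,"completed":285,...,"failures":[...]}` line above. A minimal sketch for tallying such records when post-processing a log — the field names come from the records in this log, but the parsing helper itself is hypothetical:

```python
import json

# Hypothetical tally of the per-spec JSON progress records in this log.
# Records carry a "msg" starting with PASSED/FAILED plus a cumulative
# "failures" list of failed spec names.

def tally(lines):
    passed = 0
    failed = set()
    for line in lines:
        line = line.lstrip("\u2022").strip()  # some records are prefixed with a bullet
        if not line.startswith('{"msg"'):
            continue
        rec = json.loads(line)
        if rec["msg"].startswith("PASSED"):
            passed += 1
        failed.update(rec.get("failures", []))
    return passed, sorted(failed)

sample = ['{"msg":"PASSED [sig-node] Secrets should patch a secret","total":346,'
          '"completed":286,"skipped":5593,"failed":2,"failures":["specA","specB"]}']
print(tally(sample))  # (1, ['specA', 'specB'])
```

Because `failures` is cumulative, the final record's list is the suite's full set of failed specs; here both entries are the two `SchedulerPredicates [Serial]` conformance specs.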
[sig-node] Secrets 
  should patch a secret [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
... skipping 10 lines ...
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:18:27.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9562" for this suite.
•{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":346,"completed":286,"skipped":5593,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 15 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":346,"completed":287,"skipped":5596,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 18 lines ...
• [SLOW TEST:22.209 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":288,"skipped":5620,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-01d29ff2-5b09-4280-8ddd-b63f11bda010
STEP: Creating a pod to test consume configMaps
Sep 17 01:19:05.084: INFO: Waiting up to 5m0s for pod "pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6" in namespace "configmap-9578" to be "Succeeded or Failed"
Sep 17 01:19:05.091: INFO: Pod "pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.399081ms
Sep 17 01:19:07.096: INFO: Pod "pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011417701s
STEP: Saw pod success
Sep 17 01:19:07.096: INFO: Pod "pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6" satisfied condition "Succeeded or Failed"
Sep 17 01:19:07.098: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 01:19:07.117: INFO: Waiting for pod pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6 to disappear
Sep 17 01:19:07.122: INFO: Pod pod-configmaps-abd3d216-ad4b-4a25-888c-d707948accc6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:07.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9578" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":289,"skipped":5640,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
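The test above logs the framework's polling pattern: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", with an elapsed-time line per attempt. A generic sketch of that loop, under the assumption of a 5-minute timeout and a caller-supplied phase check (the fake phase sequence below is illustrative only):

```python
import time

# Generic sketch of the e2e framework's wait loop: poll a condition up to
# a timeout, printing elapsed time on each attempt, and stop on a
# terminal phase ("Succeeded" or "Failed").

def wait_for(check, timeout=300.0, interval=2.0, clock=time.monotonic):
    start = clock()
    while True:
        elapsed = clock() - start
        phase = check()
        print(f'Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {timeout}s")
        time.sleep(interval)

# Illustrative phase sequence matching the Pending -> Succeeded
# transition logged above.
phases = iter(["Pending", "Succeeded"])
print(wait_for(lambda: next(phases), interval=0.01))  # Succeeded
```

The real framework additionally distinguishes "Succeeded" (pass) from "Failed" (test failure) after the wait returns, which is why the log then prints "Saw pod success".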
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
... skipping 38 lines ...
Sep 17 01:19:09.395: INFO: Starting http.Client for https://35.222.74.146/api/v1/namespaces/proxy-538/services/test-service/proxy/some/path/with/PUT
Sep 17 01:19:09.406: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:09.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-538" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":290,"skipped":5655,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
... skipping 11 lines ...
STEP: Updating configmap configmap-test-upd-6bb1f0d1-5d34-4c6e-a702-67cbd290ed74
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:13.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2850" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":291,"skipped":5665,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Ingress API 
  should support creating Ingress API operations [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Ingress API
... skipping 25 lines ...
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:13.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-5323" for this suite.
•{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":346,"completed":292,"skipped":5684,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-9e0a4fca-843f-4ed9-9082-a917ebd0270d
STEP: Creating a pod to test consume configMaps
Sep 17 01:19:13.801: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c" in namespace "projected-3995" to be "Succeeded or Failed"
Sep 17 01:19:13.806: INFO: Pod "pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c": Phase="Pending", Reason="", readiness=false. Elapsed: 5.345059ms
Sep 17 01:19:15.812: INFO: Pod "pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010922647s
STEP: Saw pod success
Sep 17 01:19:15.812: INFO: Pod "pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c" satisfied condition "Succeeded or Failed"
Sep 17 01:19:15.814: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c container agnhost-container: <nil>
STEP: delete the pod
Sep 17 01:19:15.849: INFO: Waiting for pod pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c to disappear
Sep 17 01:19:15.852: INFO: Pod pod-projected-configmaps-7ad6c2b1-67e6-4784-abd7-8926eba0250c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:15.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3995" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":293,"skipped":5718,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
... skipping 9 lines ...
[It] should have an terminated reason [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:19.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9937" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":346,"completed":294,"skipped":5725,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-cac61db9-88f0-460a-b00d-bc9acb370be6
STEP: Creating a pod to test consume configMaps
Sep 17 01:19:20.035: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e" in namespace "projected-7228" to be "Succeeded or Failed"
Sep 17 01:19:20.043: INFO: Pod "pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.06448ms
Sep 17 01:19:22.049: INFO: Pod "pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013677964s
STEP: Saw pod success
Sep 17 01:19:22.049: INFO: Pod "pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e" satisfied condition "Succeeded or Failed"
Sep 17 01:19:22.053: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e container agnhost-container: <nil>
STEP: delete the pod
Sep 17 01:19:22.078: INFO: Waiting for pod pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e to disappear
Sep 17 01:19:22.082: INFO: Pod pod-projected-configmaps-b5f8051d-239e-499b-abf6-dd665979cc1e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:22.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7228" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":295,"skipped":5735,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 30 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":296,"skipped":5737,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 11 lines ...
STEP: Updating configmap projected-configmap-test-upd-707dc989-6e13-46ce-b5eb-41cae6fa24e5
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:19:34.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2285" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":297,"skipped":5746,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
... skipping 33 lines ...
• [SLOW TEST:20.114 seconds]
[sig-api-machinery] Watchers
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":298,"skipped":5753,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 15 lines ...
• [SLOW TEST:15.794 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":299,"skipped":5786,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
... skipping 15 lines ...
• [SLOW TEST:5.329 seconds]
[sig-api-machinery] Garbage collector
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":346,"completed":300,"skipped":5792,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
... skipping 27 lines ...
• [SLOW TEST:5.274 seconds]
[sig-apps] Job
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":301,"skipped":5799,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
• [SLOW TEST:7.258 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":302,"skipped":5837,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 12 lines ...
STEP: Creating configMap with name cm-test-opt-create-6384a45e-4bfc-483c-a90c-62e4a3f681ba
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:20:32.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9774" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":303,"skipped":5837,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
... skipping 20 lines ...
• [SLOW TEST:5.171 seconds]
[sig-apps] ReplicaSet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Replicaset should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":304,"skipped":5878,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
S
------------------------------
[sig-network] Services 
  should complete a service status lifecycle [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 42 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:20:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9297" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":346,"completed":305,"skipped":5879,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 15 lines ...
• [SLOW TEST:7.130 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":346,"completed":306,"skipped":5881,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 26 lines ...
Sep 17 01:20:47.510: INFO: Selector matched 1 pods for map[app:agnhost]
Sep 17 01:20:47.510: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:20:47.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6436" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":346,"completed":307,"skipped":5889,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob 
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
... skipping 16 lines ...
• [SLOW TEST:74.120 seconds]
[sig-apps] CronJob
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should schedule multiple jobs concurrently [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":346,"completed":308,"skipped":5951,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-node] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:22:01.641: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
I0917 01:22:01.944733    2918 boskos.go:86] Sending heartbeat to Boskos
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Sep 17 01:22:03.757: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:22:03.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5078" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":309,"skipped":5951,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should find a service from listing all namespaces [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 10 lines ...
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:22:03.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7713" for this suite.
[AfterEach] [sig-network] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":310,"skipped":5959,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:22:03.844: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Sep 17 01:22:03.890: INFO: Waiting up to 5m0s for pod "pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89" in namespace "emptydir-4291" to be "Succeeded or Failed"
Sep 17 01:22:03.896: INFO: Pod "pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89": Phase="Pending", Reason="", readiness=false. Elapsed: 5.565089ms
Sep 17 01:22:05.901: INFO: Pod "pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010307582s
STEP: Saw pod success
Sep 17 01:22:05.901: INFO: Pod "pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89" satisfied condition "Succeeded or Failed"
Sep 17 01:22:05.910: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89 container test-container: <nil>
STEP: delete the pod
Sep 17 01:22:05.950: INFO: Waiting for pod pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89 to disappear
Sep 17 01:22:05.955: INFO: Pod pod-e164a8b9-e780-4eeb-9037-feff1dfd0b89 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:22:05.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4291" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":311,"skipped":5978,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 13 lines ...
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Sep 17 01:22:08.620: INFO: Successfully updated pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599"
Sep 17 01:22:08.620: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599" in namespace "pods-8758" to be "terminated due to deadline exceeded"
Sep 17 01:22:08.629: INFO: Pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599": Phase="Running", Reason="", readiness=true. Elapsed: 8.415828ms
Sep 17 01:22:10.633: INFO: Pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599": Phase="Running", Reason="", readiness=true. Elapsed: 2.012637949s
Sep 17 01:22:12.637: INFO: Pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.016930679s
Sep 17 01:22:12.637: INFO: Pod "pod-update-activedeadlineseconds-132408fd-6c90-4e43-a8f9-596414e1f599" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:22:12.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8758" for this suite.

• [SLOW TEST:6.668 seconds]
[sig-node] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":312,"skipped":6003,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
... skipping 3 lines ...
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-4e80e322-573b-438f-a7d8-5d8aa767ce8a
STEP: Creating a pod to test consume configMaps
Sep 17 01:22:12.699: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91" in namespace "projected-8840" to be "Succeeded or Failed"
Sep 17 01:22:12.704: INFO: Pod "pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91": Phase="Pending", Reason="", readiness=false. Elapsed: 5.155326ms
Sep 17 01:22:14.709: INFO: Pod "pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009465029s
STEP: Saw pod success
Sep 17 01:22:14.709: INFO: Pod "pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91" satisfied condition "Succeeded or Failed"
Sep 17 01:22:14.713: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91 container agnhost-container: <nil>
STEP: delete the pod
Sep 17 01:22:14.741: INFO: Waiting for pod pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91 to disappear
Sep 17 01:22:14.746: INFO: Pod pod-projected-configmaps-bca656e2-f15e-48d3-8f60-5d3899485d91 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:22:14.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8840" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":313,"skipped":6013,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSS
------------------------------
[sig-node] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 19 lines ...
• [SLOW TEST:243.103 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":314,"skipped":6018,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 47 lines ...
Sep 17 01:26:20.730: INFO: stderr: ""
Sep 17 01:26:20.730: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:26:20.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-238" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":346,"completed":315,"skipped":6073,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 17 lines ...
• [SLOW TEST:27.469 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":346,"completed":316,"skipped":6084,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-node] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
... skipping 3 lines ...
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 01:26:48.275: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-3f7f5e9a-c452-4a21-a139-1b2c022105f9" in namespace "security-context-test-2762" to be "Succeeded or Failed"
Sep 17 01:26:48.282: INFO: Pod "alpine-nnp-false-3f7f5e9a-c452-4a21-a139-1b2c022105f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.613598ms
Sep 17 01:26:50.295: INFO: Pod "alpine-nnp-false-3f7f5e9a-c452-4a21-a139-1b2c022105f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019509744s
Sep 17 01:26:52.299: INFO: Pod "alpine-nnp-false-3f7f5e9a-c452-4a21-a139-1b2c022105f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023764694s
Sep 17 01:26:52.299: INFO: Pod "alpine-nnp-false-3f7f5e9a-c452-4a21-a139-1b2c022105f9" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:26:52.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2762" for this suite.
•{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":317,"skipped":6087,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 7 lines ...
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:26:52.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-5517" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":346,"completed":318,"skipped":6091,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
... skipping 22 lines ...
• [SLOW TEST:5.004 seconds]
[sig-node] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should be submitted and removed [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":319,"skipped":6101,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Events
... skipping 24 lines ...
• [SLOW TEST:6.255 seconds]
[sig-node] Events
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":346,"completed":320,"skipped":6125,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 9 lines ...
STEP: creating the pod
Sep 17 01:27:03.738: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:27:07.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7839" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":346,"completed":321,"skipped":6135,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Sep 17 01:27:07.398: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap that has name configmap-test-emptyKey-c0595ab9-bdc1-4536-b868-bb7e48f8e692
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:27:07.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3252" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":346,"completed":322,"skipped":6141,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
... skipping 33 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:97
    should have a working scale subresource [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":346,"completed":323,"skipped":6149,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 21 lines ...
• [SLOW TEST:10.107 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":346,"completed":324,"skipped":6188,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 11 lines ...
Sep 17 01:27:37.905: INFO: stderr: ""
Sep 17 01:27:37.905: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncloud.google.com/v1\ncloud.google.com/v1beta1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nmetrics.k8s.io/v1beta1\nnetworking.gke.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscalingpolicy.kope.io/v1alpha1\nscheduling.k8s.io/v1\nsnapshot.storage.k8s.io/v1\nsnapshot.storage.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:27:37.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9963" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":346,"completed":325,"skipped":6223,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-7c14f891-c4e6-48f8-a9a4-5252d78820be
STEP: Creating a pod to test consume secrets
Sep 17 01:27:37.964: INFO: Waiting up to 5m0s for pod "pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4" in namespace "secrets-635" to be "Succeeded or Failed"
Sep 17 01:27:37.969: INFO: Pod "pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.056431ms
Sep 17 01:27:39.974: INFO: Pod "pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009588005s
STEP: Saw pod success
Sep 17 01:27:39.974: INFO: Pod "pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4" satisfied condition "Succeeded or Failed"
Sep 17 01:27:39.976: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 01:27:39.996: INFO: Waiting for pod pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4 to disappear
Sep 17 01:27:40.000: INFO: Pod pod-secrets-d8ee7090-d913-4be3-b1ab-ee4646bd2fd4 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:27:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-635" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":326,"skipped":6245,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSS
------------------------------
[sig-network] Services 
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
... skipping 69 lines ...
• [SLOW TEST:55.850 seconds]
[sig-network] Services
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":327,"skipped":6251,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicationController
... skipping 19 lines ...
• [SLOW TEST:5.247 seconds]
[sig-apps] ReplicationController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":328,"skipped":6254,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 10 lines ...
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:28:43.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1766" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":329,"skipped":6290,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
... skipping 2 lines ...
Sep 17 01:28:43.388: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Sep 17 01:28:43.621: INFO: Waiting up to 5m0s for pod "downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c" in namespace "downward-api-3523" to be "Succeeded or Failed"
Sep 17 01:28:43.629: INFO: Pod "downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.714691ms
Sep 17 01:28:45.635: INFO: Pod "downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013823055s
STEP: Saw pod success
Sep 17 01:28:45.635: INFO: Pod "downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c" satisfied condition "Succeeded or Failed"
Sep 17 01:28:45.641: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c container dapi-container: <nil>
STEP: delete the pod
Sep 17 01:28:45.696: INFO: Waiting for pod downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c to disappear
Sep 17 01:28:45.701: INFO: Pod downward-api-2a3c0a7c-a103-4d83-bb54-389f30bd257c no longer exists
[AfterEach] [sig-node] Downward API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:28:45.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3523" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":346,"completed":330,"skipped":6301,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}

------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
... skipping 3 lines ...
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-bb12bc1f-5aaf-42a5-89b0-4e7d4ea5f2f4
STEP: Creating a pod to test consume secrets
Sep 17 01:28:45.770: INFO: Waiting up to 5m0s for pod "pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572" in namespace "secrets-7302" to be "Succeeded or Failed"
Sep 17 01:28:45.778: INFO: Pod "pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572": Phase="Pending", Reason="", readiness=false. Elapsed: 7.734853ms
Sep 17 01:28:47.784: INFO: Pod "pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013361643s
STEP: Saw pod success
Sep 17 01:28:47.784: INFO: Pod "pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572" satisfied condition "Succeeded or Failed"
Sep 17 01:28:47.790: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-8xgx pod pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572 container secret-volume-test: <nil>
STEP: delete the pod
Sep 17 01:28:47.808: INFO: Waiting for pod pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572 to disappear
Sep 17 01:28:47.813: INFO: Pod pod-secrets-2113fb2e-aa7f-41ad-a1fa-ee924add0572 no longer exists
[AfterEach] [sig-storage] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:28:47.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7302" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":331,"skipped":6301,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 72 lines ...
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:28:51.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4656" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":346,"completed":332,"skipped":6335,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:28:52.001: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Sep 17 01:28:52.068: INFO: Waiting up to 5m0s for pod "pod-5eb13863-5c3c-477f-8b91-b8533d0d751d" in namespace "emptydir-2050" to be "Succeeded or Failed"
Sep 17 01:28:52.073: INFO: Pod "pod-5eb13863-5c3c-477f-8b91-b8533d0d751d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.845552ms
Sep 17 01:28:54.077: INFO: Pod "pod-5eb13863-5c3c-477f-8b91-b8533d0d751d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00823616s
STEP: Saw pod success
Sep 17 01:28:54.077: INFO: Pod "pod-5eb13863-5c3c-477f-8b91-b8533d0d751d" satisfied condition "Succeeded or Failed"
Sep 17 01:28:54.080: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-5eb13863-5c3c-477f-8b91-b8533d0d751d container test-container: <nil>
STEP: delete the pod
Sep 17 01:28:54.098: INFO: Waiting for pod pod-5eb13863-5c3c-477f-8b91-b8533d0d751d to disappear
Sep 17 01:28:54.103: INFO: Pod pod-5eb13863-5c3c-477f-8b91-b8533d0d751d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:28:54.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2050" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":333,"skipped":6345,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSS
------------------------------
[sig-apps] DisruptionController 
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
... skipping 34 lines ...
• [SLOW TEST:8.436 seconds]
[sig-apps] DisruptionController
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should block an eviction until the PDB is updated to allow it [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":346,"completed":334,"skipped":6349,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-9vcf
STEP: Creating a pod to test atomic-volume-subpath
Sep 17 01:29:02.622: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-9vcf" in namespace "subpath-2131" to be "Succeeded or Failed"
Sep 17 01:29:02.633: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.461128ms
Sep 17 01:29:04.637: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 2.014379161s
Sep 17 01:29:06.642: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 4.019441842s
Sep 17 01:29:08.646: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 6.023516776s
Sep 17 01:29:10.651: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 8.028018958s
Sep 17 01:29:12.655: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 10.032644303s
Sep 17 01:29:14.661: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 12.038093012s
Sep 17 01:29:16.667: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 14.044033571s
Sep 17 01:29:18.673: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 16.050393029s
Sep 17 01:29:20.678: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 18.055529953s
Sep 17 01:29:22.683: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Running", Reason="", readiness=true. Elapsed: 20.060355954s
Sep 17 01:29:24.687: INFO: Pod "pod-subpath-test-secret-9vcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.06437786s
STEP: Saw pod success
Sep 17 01:29:24.687: INFO: Pod "pod-subpath-test-secret-9vcf" satisfied condition "Succeeded or Failed"
Sep 17 01:29:24.690: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-subpath-test-secret-9vcf container test-container-subpath-secret-9vcf: <nil>
STEP: delete the pod
Sep 17 01:29:24.723: INFO: Waiting for pod pod-subpath-test-secret-9vcf to disappear
Sep 17 01:29:24.726: INFO: Pod pod-subpath-test-secret-9vcf no longer exists
STEP: Deleting pod pod-subpath-test-secret-9vcf
Sep 17 01:29:24.726: INFO: Deleting pod "pod-subpath-test-secret-9vcf" in namespace "subpath-2131"
... skipping 7 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":346,"completed":335,"skipped":6359,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API 
  should delete a collection of events [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
... skipping 12 lines ...
Sep 17 01:29:24.862: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:29:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8927" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":336,"skipped":6388,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
• [SLOW TEST:16.320 seconds]
[sig-api-machinery] ResourceQuota
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":346,"completed":337,"skipped":6399,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
... skipping 27 lines ...
• [SLOW TEST:22.116 seconds]
[sig-node] Probing container
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":338,"skipped":6407,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SS
------------------------------
[sig-auth] ServiceAccounts 
  should run through the lifecycle of a ServiceAccount [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
... skipping 10 lines ...
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:30:03.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8026" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":346,"completed":339,"skipped":6409,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
... skipping 69 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller  [Conformance]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":346,"completed":340,"skipped":6428,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
... skipping 20 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:30:13.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-104" for this suite.
STEP: Destroying namespace "webhook-104-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":341,"skipped":6428,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:30:13.772: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on tmpfs
Sep 17 01:30:14.013: INFO: Waiting up to 5m0s for pod "pod-8153dd80-ed57-4876-ba39-9af61074fa30" in namespace "emptydir-7842" to be "Succeeded or Failed"
Sep 17 01:30:14.020: INFO: Pod "pod-8153dd80-ed57-4876-ba39-9af61074fa30": Phase="Pending", Reason="", readiness=false. Elapsed: 7.258431ms
Sep 17 01:30:16.025: INFO: Pod "pod-8153dd80-ed57-4876-ba39-9af61074fa30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011988599s
STEP: Saw pod success
Sep 17 01:30:16.025: INFO: Pod "pod-8153dd80-ed57-4876-ba39-9af61074fa30" satisfied condition "Succeeded or Failed"
Sep 17 01:30:16.028: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-8153dd80-ed57-4876-ba39-9af61074fa30 container test-container: <nil>
STEP: delete the pod
Sep 17 01:30:16.052: INFO: Waiting for pod pod-8153dd80-ed57-4876-ba39-9af61074fa30 to disappear
Sep 17 01:30:16.058: INFO: Pod pod-8153dd80-ed57-4876-ba39-9af61074fa30 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:30:16.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7842" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":342,"skipped":6449,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
... skipping 2 lines ...
Sep 17 01:30:16.069: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Sep 17 01:30:16.155: INFO: Waiting up to 5m0s for pod "pod-e3ccdaab-813b-4139-90f6-155df87b84f0" in namespace "emptydir-1814" to be "Succeeded or Failed"
Sep 17 01:30:16.165: INFO: Pod "pod-e3ccdaab-813b-4139-90f6-155df87b84f0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.83291ms
Sep 17 01:30:18.169: INFO: Pod "pod-e3ccdaab-813b-4139-90f6-155df87b84f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013963292s
STEP: Saw pod success
Sep 17 01:30:18.169: INFO: Pod "pod-e3ccdaab-813b-4139-90f6-155df87b84f0" satisfied condition "Succeeded or Failed"
Sep 17 01:30:18.172: INFO: Trying to get logs from node kt2-280c76ac-1743-minion-group-xp78 pod pod-e3ccdaab-813b-4139-90f6-155df87b84f0 container test-container: <nil>
STEP: delete the pod
Sep 17 01:30:18.201: INFO: Waiting for pod pod-e3ccdaab-813b-4139-90f6-155df87b84f0 to disappear
Sep 17 01:30:18.205: INFO: Pod pod-e3ccdaab-813b-4139-90f6-155df87b84f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:30:18.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1814" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":343,"skipped":6463,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 6 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Sep 17 01:30:18.270: INFO: >>> kubeConfig: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Sep 17 01:30:20.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1018" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":346,"completed":344,"skipped":6495,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}
SSSSSSSSSSS
Sep 17 01:30:20.212: INFO: Running AfterSuite actions on all nodes
Sep 17 01:30:20.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func18.2
Sep 17 01:30:20.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func8.2
Sep 17 01:30:20.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage.glob..func7.2
Sep 17 01:30:20.212: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func17.3
Sep 17 01:30:20.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func9.2
Sep 17 01:30:20.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func4.2
Sep 17 01:30:20.213: INFO: Running Cleanup Action: k8s.io/kubernetes/test/e2e/storage/vsphere.glob..func1.3
Sep 17 01:30:20.213: INFO: Running AfterSuite actions on node 1
Sep 17 01:30:20.213: INFO: Dumping logs locally to: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1
Sep 17 01:30:20.213: INFO: Error running cluster/log-dump/log-dump.sh: fork/exec ../../cluster/log-dump/log-dump.sh: no such file or directory

JUnit report was created: /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/junit_01.xml
{"msg":"Test Suite completed","total":346,"completed":344,"skipped":6506,"failed":2,"failures":["[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]"]}


Summarizing 2 Failures:

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates that NodeSelector is respected if not matching  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:436

[Fail] [sig-scheduling] SchedulerPredicates [Serial] [It] validates resource limits of pods that are allowed to run  [Conformance] 
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:323

Ran 346 of 6852 Specs in 6700.263 seconds
FAIL! -- 344 Passed | 2 Failed | 0 Pending | 6506 Skipped
--- FAIL: TestE2E (6702.31s)
FAIL

Ginkgo ran 1 suite in 1h51m42.403734357s
Test Suite Failed
F0917 01:30:20.264224   97228 ginkgo.go:205] failed to run ginkgo tester: exit status 1
I0917 01:30:20.268263    2918 down.go:29] GCE deployer starting Down()
I0917 01:30:20.268373    2918 common.go:204] checking locally built kubectl ...
I0917 01:30:20.268656    2918 down.go:43] About to run script at: /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh
I0917 01:30:20.268680    2918 local.go:42] ⚙️ /home/prow/go/src/k8s.io/kubernetes/cluster/kube-down.sh 
Bringing down cluster using provider: gce
... calling verify-prereqs
... skipping 38 lines ...
Property "users.k8s-infra-e2e-boskos-082_kt2-280c76ac-1743-basic-auth" unset.
Property "contexts.k8s-infra-e2e-boskos-082_kt2-280c76ac-1743" unset.
Cleared config for k8s-infra-e2e-boskos-082_kt2-280c76ac-1743 from /logs/artifacts/280c76ac-1743-11ec-a91f-4a1b528dc7f1/kubetest2-kubeconfig
Done
I0917 01:36:37.744229    2918 down.go:53] about to delete nodeport firewall rule
I0917 01:36:37.744283    2918 local.go:42] ⚙️ gcloud compute firewall-rules delete --project k8s-infra-e2e-boskos-082 kt2-280c76ac-1743-minion-nodeports
ERROR: (gcloud.compute.firewall-rules.delete) Could not fetch resource:
 - The resource 'projects/k8s-infra-e2e-boskos-082/global/firewalls/kt2-280c76ac-1743-minion-nodeports' was not found

W0917 01:36:38.767062    2918 firewall.go:62] failed to delete nodeports firewall rules: might be deleted already?
I0917 01:36:38.767092    2918 down.go:59] releasing boskos project
I0917 01:36:38.788663    2918 boskos.go:83] Boskos heartbeat func received signal to close
Error: exit status 255
+ EXIT_VALUE=1
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
f6a6ea3db8c9
... skipping 4 lines ...