Recent runs | View in Spyglass

Result:   FAILURE
Tests:    1 failed / 0 succeeded
Started:
Elapsed:  2h23m
Revision: release-0.5
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capz\-e2e\sConformance\sTests\sconformance\-tests$'
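The `--ginkgo.focus` value is a regular expression matched against the full spec name: `\s` covers the spaces between "capz-e2e", "Conformance Tests", and "conformance-tests", and the trailing `$` anchors the match. A minimal sketch of invoking the same focused spec directly with the Ginkgo CLI, assuming a local checkout of the provider repo and `ginkgo` on PATH (the kubeconfig and Azure credential wiring the job performs is omitted):

```shell
# Hypothetical local reproduction; the Prow job reaches the same spec
# through hack/e2e.go, which forwards --ginkgo.focus to the test suite.
cd "$GOPATH/src/sigs.k8s.io/cluster-api-provider-azure"

# Focus on the single capz conformance spec using the job's regex.
ginkgo -v --focus='capz\-e2e\sConformance\sTests\sconformance\-tests$' ./test/e2e
```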
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
Unexpected error:
    <*errors.withStack | 0xc000304168>: {
        error: <*errors.withMessage | 0xc0009b4480>{
            cause: <*errors.errorString | 0xc00064a1a0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1c16425, 0x1cc8298, 0x7fa77d, 0x1cc8c23, 0x4e5787, 0x4e4c59, 0x7fead2, 0x7fc603, 0x7fc21c, 0x7fb967, 0x8024ef, 0x801b92, 0x811491, 0x810fa7, 0x810797, 0x812ea6, 0x820bd8, 0x820916, 0x1cae6ba, 0x529ce5, 0x474781],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:193
[BeforeEach] Conformance Tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:56
INFO: Cluster name is capz-conf-ewh6sx
STEP: Creating namespace "capz-conf-ewh6sx" for hosting the cluster
Jan 24 17:54:04.918: INFO: starting to create namespace for hosting the "capz-conf-ewh6sx" test spec
INFO: Creating namespace capz-conf-ewh6sx
INFO: Creating event watcher for namespace "capz-conf-ewh6sx"
[Measure] conformance-tests
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/conformance_test.go:100
INFO: Creating the workload cluster with name "capz-conf-ewh6sx" using the "(default)" template (Kubernetes v1.22.1, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster capz-conf-ewh6sx --infrastructure (default) --kubernetes-version v1.22.1 --control-plane-machine-count 1 --worker-machine-count 2 --flavor (default)
INFO: Applying the cluster template yaml to the cluster
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by capz-conf-ewh6sx/capz-conf-ewh6sx-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane capz-conf-ewh6sx/capz-conf-ewh6sx-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e, command=["-nodes=1" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=2" "--kubeconfig=/tmp/kubeconfig" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true"]
I0124 17:59:05.436896 14 e2e.go:129] Starting e2e run "fe47ba88-a861-437a-a415-d28627bc158f" on Ginkgo node 1
{"msg":"Test Suite starting","total":346,"completed":0,"skipped":0,"failed":0}

Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1674583145 - Will randomize all specs
Will run 346 of 6432 specs

Jan 24 17:59:07.445: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 17:59:07.448: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 24 17:59:07.911: INFO: Condition Ready of node capz-conf-ewh6sx-md-0-tb56s is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-01-24 17:58:43 +0000 UTC}]. Failure
Jan 24 17:59:07.911: INFO: Condition Ready of node capz-conf-ewh6sx-md-0-xf5qq is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-01-24 17:58:53 +0000 UTC}]. Failure
Jan 24 17:59:07.911: INFO: Unschedulable nodes= 2, maximum value for starting tests= 0
Jan 24 17:59:07.911: INFO: -> Node capz-conf-ewh6sx-md-0-tb56s [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-01-24 17:58:43 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Jan 24 17:59:07.911: INFO: -> Node capz-conf-ewh6sx-md-0-xf5qq [[[ Ready=false, Network(available)=false, Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>} {node.kubernetes.io/not-ready NoExecute 2023-01-24 17:58:53 +0000 UTC}], NonblockingTaints=node-role.kubernetes.io/master ]]]
Jan 24 17:59:07.911: INFO: ==== node wait: 1 out of 3 nodes are ready, max notReady allowed 0. Need 2 more before starting.
Jan 24 17:59:38.057: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 24 17:59:38.606: INFO: 13 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 24 17:59:38.606: INFO: expected 3 pod replicas in namespace 'kube-system', 3 are Running and Ready.
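The suite blocks here because the node controller's `node.kubernetes.io/not-ready` taints are still on both workers; they clear once the CNI (Calico in this cluster) brings the node network up. The same state the harness is polling can be read with standard kubectl (nothing job-specific beyond the kubeconfig path):

```shell
# Show each node's Ready condition alongside its taint keys; the two md-0
# workers above would list node.kubernetes.io/not-ready until the CNI
# reports them ready.
kubectl --kubeconfig=/tmp/kubeconfig get nodes \
  -o custom-columns='NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,TAINTS:.spec.taints[*].key'
```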
Jan 24 17:59:38.606: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 24 17:59:38.722: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'calico-node' (0 seconds elapsed)
Jan 24 17:59:38.722: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 24 17:59:38.722: INFO: e2e test version: v1.22.1
Jan 24 17:59:38.822: INFO: kube-apiserver version: v1.22.1
Jan 24 17:59:38.822: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 17:59:38.926: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 17:59:38.927: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
Jan 24 17:59:39.342: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
W0124 17:59:39.342129 14 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 17:59:40.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-639" for this suite.
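The Endpoint lifecycle steps above map onto ordinary kubectl verbs. A rough hand-run equivalent (the object name `demo-endpoint` and its address are made up for illustration, not taken from the test):

```shell
# Create a bare Endpoints object, read it back, patch it, then delete it,
# mirroring the create/list/update/patch/delete cycle the spec walks through.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Endpoints
metadata:
  name: demo-endpoint
subsets:
  - addresses:
      - ip: 10.0.0.10
    ports:
      - port: 80
EOF
kubectl get endpoints demo-endpoint
kubectl patch endpoints demo-endpoint --type=merge -p '{"metadata":{"labels":{"test":"patched"}}}'
kubectl delete endpoints demo-endpoint
```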
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":346,"completed":1,"skipped":41,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 17:59:41.095: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should retry creating failed daemon pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 17:59:42.313: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:42.419: INFO: Number of nodes with available pods: 0
Jan 24 17:59:42.419: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:43.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:43.636: INFO: Number of nodes with available pods: 0
Jan 24 17:59:43.636: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:44.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:44.636: INFO: Number of nodes with available pods: 0
Jan 24 17:59:44.636: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:45.532: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:45.637: INFO: Number of nodes with available pods: 0
Jan 24 17:59:45.637: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:46.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:46.637: INFO: Number of nodes with available pods: 0
Jan 24 17:59:46.638: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:47.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:47.636: INFO: Number of nodes with available pods: 0
Jan 24 17:59:47.636: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:48.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:48.636: INFO: Number of nodes with available pods: 0
Jan 24 17:59:48.637: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:49.530: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:49.636: INFO: Number of nodes with available pods: 0
Jan 24 17:59:49.636: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:50.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:50.637: INFO: Number of nodes with available pods: 0
Jan 24 17:59:50.637: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:51.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:51.637: INFO: Number of nodes with available pods: 0
Jan 24 17:59:51.637: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:52.532: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:52.638: INFO: Number of nodes with available pods: 0
Jan 24 17:59:52.638: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:53.532: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:53.637: INFO: Number of nodes with available pods: 0
Jan 24 17:59:53.637: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:54.532: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:54.639: INFO: Number of nodes with available pods: 1
Jan 24 17:59:54.639: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:55.531: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:55.636: INFO: Number of nodes with available pods: 1
Jan 24 17:59:55.636: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 17:59:56.532: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:56.637: INFO: Number of nodes with available pods: 2
Jan 24 17:59:56.637: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
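The repeated "can't tolerate" lines are expected, not an error: the test DaemonSet declares no toleration for the control plane's `node-role.kubernetes.io/master:NoSchedule` taint, so that node is skipped and only the two workers count toward availability. For comparison, a DaemonSet that should also land on such nodes would be patched along these lines (the DaemonSet name and namespace match the test's, but the patch itself is purely illustrative):

```shell
# Hypothetical: grant the "daemon-set" DaemonSet a toleration for the
# control-plane taint so its pods schedule on all three nodes.
kubectl -n daemonsets-6818 patch daemonset daemon-set --type=json -p='[
  {"op": "add", "path": "/spec/template/spec/tolerations", "value": [
    {"key": "node-role.kubernetes.io/master", "operator": "Exists", "effect": "NoSchedule"}
  ]}
]'
```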
Jan 24 17:59:57.062: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:57.168: INFO: Number of nodes with available pods: 1
Jan 24 17:59:57.168: INFO: Node capz-conf-ewh6sx-md-0-xf5qq is running more than one daemon pod
Jan 24 17:59:58.279: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 17:59:58.390: INFO: Number of nodes with available pods: 2
Jan 24 17:59:58.390: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6818, will wait for the garbage collector to delete the pods
Jan 24 17:59:58.950: INFO: Deleting DaemonSet.extensions daemon-set took: 104.302589ms
Jan 24 17:59:59.051: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.150215ms
Jan 24 18:00:01.554: INFO: Number of nodes with available pods: 0
Jan 24 18:00:01.554: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 18:00:01.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"1113"},"items":null}
Jan 24 18:00:01.760: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1114"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:00:02.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6818" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":346,"completed":2,"skipped":53,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PreemptionExecutionPath
  runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:00:02.300: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jan 24 18:00:03.116: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 18:01:04.068: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:01:04.169: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:488
STEP: Finding an available node
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
Jan 24 18:01:07.211: INFO: found a healthy node: capz-conf-ewh6sx-md-0-tb56s
[It] runs ReplicaSets to verify preemption running path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:01:16.781: INFO: pods created so far: [1 1 1]
Jan 24 18:01:16.781: INFO: length of pods created so far: 3
Jan 24 18:01:18.988: INFO: pods created so far: [2 2 1]
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:01:25.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-605" for this suite.
[AfterEach] PreemptionExecutionPath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:462
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:01:26.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-1483" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":346,"completed":3,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
  Kubectl cluster-info
  should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:01:27.475: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating cluster-info
Jan 24 18:01:27.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8275 cluster-info'
Jan 24 18:01:28.927: INFO: stderr: ""
Jan 24 18:01:28.927: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at
\x1b[0;33mhttps://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:01:28.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8275" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":346,"completed":4,"skipped":92,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:01:29.146: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:01:47.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-197" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":346,"completed":5,"skipped":99,"failed":0}
SSSSSSSSS
------------------------------
[sig-node] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:01:47.723: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Jan 24 18:01:48.233: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:01:53.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3728" for this suite.
•{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":346,"completed":6,"skipped":108,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:01:53.731: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 24 18:01:59.965: INFO: Pod name wrapped-volume-race-a3fc51ff-c382-45cf-9606-19bfed5e45e6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a3fc51ff-c382-45cf-9606-19bfed5e45e6 in namespace emptydir-wrapper-7430, will wait for the garbage collector to delete the pods
Jan 24 18:02:15.060: INFO: Deleting ReplicationController wrapped-volume-race-a3fc51ff-c382-45cf-9606-19bfed5e45e6 took: 112.287827ms
Jan 24 18:02:15.161: INFO: Terminating ReplicationController wrapped-volume-race-a3fc51ff-c382-45cf-9606-19bfed5e45e6 pods took: 100.83921ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 18:02:19.552: INFO: Pod name wrapped-volume-race-fec575df-3d72-42fd-84f6-3a03ed712141: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-fec575df-3d72-42fd-84f6-3a03ed712141 in namespace emptydir-wrapper-7430, will wait for the garbage collector to delete the pods
Jan 24 18:02:36.635: INFO: Deleting ReplicationController wrapped-volume-race-fec575df-3d72-42fd-84f6-3a03ed712141 took: 161.340612ms
Jan 24 18:02:36.836: INFO: Terminating ReplicationController wrapped-volume-race-fec575df-3d72-42fd-84f6-3a03ed712141 pods took: 200.963796ms
STEP: Creating RC which spawns configmap-volume pods
Jan 24 18:02:40.313: INFO: Pod name wrapped-volume-race-8938edca-0a3e-42a7-a1a5-49a75d8979bf: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8938edca-0a3e-42a7-a1a5-49a75d8979bf in namespace emptydir-wrapper-7430, will wait for the garbage collector to delete the pods
Jan 24 18:03:07.387: INFO: Deleting ReplicationController wrapped-volume-race-8938edca-0a3e-42a7-a1a5-49a75d8979bf took: 160.809665ms
Jan 24 18:03:07.488: INFO: Terminating ReplicationController wrapped-volume-race-8938edca-0a3e-42a7-a1a5-49a75d8979bf pods took: 100.571015ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:16.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7430" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":346,"completed":7,"skipped":112,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:16.260: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:03:17.987: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:03:20.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:03:22.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:03:24.102: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180197, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:03:27.203: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:29.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5381" for this suite.
STEP: Destroying namespace "webhook-5381-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":346,"completed":8,"skipped":115,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:29.872: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:03:30.497: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267" in namespace "projected-8946" to be "Succeeded or Failed"
Jan 24 18:03:30.600: INFO: Pod "downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267": Phase="Pending", Reason="", readiness=false. Elapsed: 103.308773ms
Jan 24 18:03:32.704: INFO: Pod "downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207121543s
STEP: Saw pod success
Jan 24 18:03:32.704: INFO: Pod "downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267" satisfied condition "Succeeded or Failed"
Jan 24 18:03:32.808: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267 container client-container: <nil>
STEP: delete the pod
Jan 24 18:03:33.035: INFO: Waiting for pod downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267 to disappear
Jan 24 18:03:33.138: INFO: Pod downwardapi-volume-0cbe96e2-1cbc-478b-b302-fb99e1952267 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:33.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8946" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":346,"completed":9,"skipped":122,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:33.359: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:03:35.051: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180214, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180214, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180214, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180214, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:03:38.269: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:40.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9673" for this suite.
STEP: Destroying namespace "webhook-9673-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":346,"completed":10,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Kubelet
  when scheduling a busybox command that always fails in a pod
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:41.097: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:41.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4375" for this suite.
•{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":346,"completed":11,"skipped":153,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:42.040: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:03:42.696: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae" in namespace "downward-api-5494" to be "Succeeded or Failed"
Jan 24 18:03:42.799: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae": Phase="Pending", Reason="", readiness=false. Elapsed: 102.911743ms
Jan 24 18:03:44.904: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207399977s
Jan 24 18:03:47.009: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312633974s
Jan 24 18:03:49.114: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.417880714s
Jan 24 18:03:51.218: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.522088081s
STEP: Saw pod success
Jan 24 18:03:51.218: INFO: Pod "downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae" satisfied condition "Succeeded or Failed"
Jan 24 18:03:51.322: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae container client-container: <nil>
STEP: delete the pod
Jan 24 18:03:51.553: INFO: Waiting for pod downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae to disappear
Jan 24 18:03:51.655: INFO: Pod downwardapi-volume-9dbfaf1c-34e6-49c7-8096-00a261df3dae no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:03:51.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5494" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":12,"skipped":197,"failed":0}
SSSS
------------------------------
[sig-node] Container Lifecycle Hook
  when create a pod with lifecycle hook
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:03:51.870: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Jan 24 18:03:52.598: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:03:54.703: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Jan 24 18:03:55.016: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:03:57.120: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 24 18:03:57.437: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 18:03:57.542: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 18:03:59.543: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 18:03:59.647: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 24 18:04:01.543: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 24 18:04:01.647: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:04:01.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3532" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":346,"completed":13,"skipped":201,"failed":0}
------------------------------
[sig-node] Kubelet
  when scheduling a busybox Pod with hostAliases
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:04:01.862: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:04:02.596: INFO: The status of Pod busybox-host-aliases6fb57fa5-86d9-442a-9a73-b564375e308d is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:04:04.700: INFO: The status of Pod busybox-host-aliases6fb57fa5-86d9-442a-9a73-b564375e308d is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:04:04.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4119" for this suite.
�[32m•�[0m{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":14,"skipped":201,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] Watchers�[0m �[1mshould observe add, update, and delete watch notifications on configmaps [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:04:05.127: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a watch on configmaps with label A �[1mSTEP�[0m: creating a watch on configmaps with label B �[1mSTEP�[0m: creating a watch on configmaps with label A or B �[1mSTEP�[0m: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 24 18:04:06.061: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3091 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 
18:04:06.061: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3091 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:06 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A and ensuring the correct watchers observe the notification Jan 24 18:04:16.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3138 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 18:04:16.270: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3138 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A again and ensuring the correct watchers observe the notification Jan 24 18:04:26.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3153 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 
{"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 18:04:26.480: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3153 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap A and ensuring the correct watchers observe the notification Jan 24 18:04:36.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3168 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 18:04:36.588: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-902 04f449bf-b66e-415b-8680-5825f5c59ee3 3168 0 2023-01-24 18:04:06 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:16 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 24 18:04:46.696: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-902 
d2c1f78d-44d6-425e-970d-0af2a49ae8c4 3190 0 2023-01-24 18:04:46 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 18:04:46.696: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-902 d2c1f78d-44d6-425e-970d-0af2a49ae8c4 3190 0 2023-01-24 18:04:46 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap B and ensuring the correct watchers observe the notification Jan 24 18:04:56.802: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-902 d2c1f78d-44d6-425e-970d-0af2a49ae8c4 3207 0 2023-01-24 18:04:46 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 24 18:04:56.802: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-902 d2c1f78d-44d6-425e-970d-0af2a49ae8c4 3207 0 2023-01-24 18:04:46 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-24 18:04:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:05:06.803: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready �[1mSTEP�[0m: Destroying namespace "watch-902" for this suite. �[32m•�[0m{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":346,"completed":15,"skipped":214,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-node] Pods�[0m �[1mshould be updated [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:05:07.021: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes Jan 24 18:05:07.747: INFO: The status of Pod pod-update-1e7a2d82-b7f7-4fd8-8345-bf2c350708a8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 18:05:09.852: INFO: The status of Pod pod-update-1e7a2d82-b7f7-4fd8-8345-bf2c350708a8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 18:05:11.852: INFO: The status of Pod pod-update-1e7a2d82-b7f7-4fd8-8345-bf2c350708a8 is Pending, waiting for it to be Running (with Ready = true) Jan 24 18:05:13.853: INFO: The status of Pod pod-update-1e7a2d82-b7f7-4fd8-8345-bf2c350708a8 is Running (Ready = true) �[1mSTEP�[0m: 
verifying the pod is in kubernetes �[1mSTEP�[0m: updating the pod Jan 24 18:05:14.768: INFO: Successfully updated pod "pod-update-1e7a2d82-b7f7-4fd8-8345-bf2c350708a8" �[1mSTEP�[0m: verifying the updated pod is in kubernetes Jan 24 18:05:14.976: INFO: Pod update OK [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:05:14.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-4" for this suite. �[32m•�[0m{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":346,"completed":16,"skipped":226,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-network] Proxy�[0m �[90mversion v1�[0m �[1mshould proxy through a service and a pod [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:05:15.190: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename proxy �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: starting an echo server on multiple ports �[1mSTEP�[0m: creating replication controller proxy-service-n9sfx in namespace proxy-4417 I0124 18:05:15.925500 14 runners.go:190] Created replication controller with name: proxy-service-n9sfx, namespace: proxy-4417, replica count: 1 I0124 18:05:17.076276 14 runners.go:190] 
proxy-service-n9sfx Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0124 18:05:18.076517 14 runners.go:190] proxy-service-n9sfx Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 24 18:05:18.191: INFO: setup took 2.483671264s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Jan 24 18:05:18.299: INFO: (0) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 107.338186ms) Jan 24 18:05:18.303: INFO: (0) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 112.257519ms) Jan 24 18:05:18.304: INFO: (0) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 112.310085ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 116.441624ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 116.641523ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 116.445556ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 116.921375ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 116.691096ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem...
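The attempt entries in this log all share one shape, `(attempt) URL: body (status; latency)`, so the 320 results can be extracted mechanically when triaging a run. A minimal sketch (the regex and helper below are illustrative tooling, not part of the conformance suite) that pulls the attempt number, URL, HTTP status, and latency out of lines in this format:

```python
import re

# Each proxy-test entry has the shape:
#   (attempt) URL: response-body (status; latency)
# e.g. "... (0) /api/v1/.../proxy/: tls qux (200; 112.310085ms)"
ENTRY_RE = re.compile(r"\((\d+)\)\s+(\S+):.*?\((\d{3});\s*([\d.]+)ms\)")

def parse_entries(text):
    """Return (attempt, url, status, latency_ms) tuples for each entry found."""
    return [
        (int(attempt), url, int(status), float(ms))
        for attempt, url, status, ms in ENTRY_RE.findall(text)
    ]

if __name__ == "__main__":
    sample = ("Jan 24 18:05:18.304: INFO: (0) "
              "/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: "
              "tls qux (200; 112.310085ms)")
    for attempt, url, status, ms in parse_entries(sample):
        print(attempt, status, ms, url)
```

Running the full log through `parse_entries` would, for instance, let you confirm every attempt returned 200 and compare latencies per URL scheme.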
(200; 116.894483ms) Jan 24 18:05:18.308: INFO: (0) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 117.09447ms) Jan 24 18:05:18.397: INFO: (0) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 205.706396ms) Jan 24 18:05:18.397: INFO: (0) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 205.923675ms) Jan 24 18:05:18.398: INFO: (0) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 206.601818ms) Jan 24 18:05:18.398: INFO: (0) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 206.594052ms) Jan 24 18:05:18.398: INFO: (0) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 206.786943ms) Jan 24 18:05:18.399: INFO: (0) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 207.700168ms) Jan 24 18:05:18.510: INFO: (1) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 109.699575ms) Jan 24 18:05:18.510: INFO: (1) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 109.566029ms) Jan 24 18:05:18.510: INFO: (1) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 110.395676ms) Jan 24 18:05:18.511: INFO: (1) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 110.763367ms) Jan 24 18:05:18.511: INFO: (1) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 111.884843ms) Jan 24 18:05:18.511: INFO: (1) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.272139ms) Jan 24 18:05:18.512: INFO: (1) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 111.130631ms) Jan 24 18:05:18.514: INFO: (1) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 113.267676ms) Jan 24 18:05:18.514: INFO: (1) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 113.60235ms) Jan 24 18:05:18.514: INFO: (1) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 113.886631ms) Jan 24 18:05:18.514: INFO: (1) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.200713ms) Jan 24 18:05:18.516: INFO: (1) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 115.881942ms) Jan 24 18:05:18.516: INFO: (1) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 117.114366ms) Jan 24 18:05:18.516: INFO: (1) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 117.030067ms) Jan 24 18:05:18.516: INFO: (1) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 115.701119ms) Jan 24 18:05:18.516: INFO: (1) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 114.962838ms) Jan 24 18:05:18.626: INFO: (2) 
/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 109.318057ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.284037ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 110.40895ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 110.98831ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 111.120066ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 110.402147ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 110.632651ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.144453ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 110.281329ms) Jan 24 18:05:18.628: INFO: (2) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 110.657507ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 114.5713ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.33416ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 114.447203ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 114.643545ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 114.606486ms) Jan 24 18:05:18.632: INFO: (2) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 114.56969ms) Jan 24 18:05:18.742: INFO: (3) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 109.509554ms) Jan 24 18:05:18.742: INFO: (3) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 109.570324ms) Jan 24 18:05:18.742: INFO: (3) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 109.864879ms) Jan 24 18:05:18.743: INFO: (3) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 109.71227ms) Jan 24 18:05:18.743: INFO: (3) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 110.259468ms) Jan 24 18:05:18.743: INFO: (3) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 109.56435ms) Jan 24 18:05:18.743: INFO: (3) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 109.968811ms) Jan 24 18:05:18.744: INFO: (3) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 111.41603ms) Jan 24 18:05:18.745: INFO: (3) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 111.877729ms) Jan 24 18:05:18.745: INFO: (3) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.963775ms) Jan 24 18:05:18.745: INFO: (3) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 112.11583ms) Jan 24 18:05:18.746: INFO: (3) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 113.11826ms) Jan 24 18:05:18.746: INFO: (3) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 113.477155ms) Jan 24 18:05:18.747: INFO: (3) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 113.971007ms) Jan 24 18:05:18.747: INFO: (3) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 113.601ms) Jan 24 18:05:18.747: INFO: (3) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 113.835729ms) Jan 24 18:05:18.855: INFO: (4) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 107.484048ms) Jan 24 18:05:18.857: INFO: (4) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 108.891654ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 115.52619ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 115.443039ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 115.513124ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 115.930938ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 115.611465ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 115.527666ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 115.972232ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... 
(200; 115.510751ms) Jan 24 18:05:18.863: INFO: (4) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 115.61204ms) Jan 24 18:05:18.865: INFO: (4) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 116.812612ms) Jan 24 18:05:18.865: INFO: (4) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 116.771746ms) Jan 24 18:05:18.866: INFO: (4) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 118.381769ms) Jan 24 18:05:18.866: INFO: (4) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 118.67788ms) Jan 24 18:05:18.867: INFO: (4) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 119.122854ms) Jan 24 18:05:18.988: INFO: (5) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 119.28663ms) Jan 24 18:05:18.988: INFO: (5) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 119.894316ms) Jan 24 18:05:18.988: INFO: (5) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 119.781711ms) Jan 24 18:05:18.988: INFO: (5) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 119.659224ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 121.319262ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 121.170995ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 121.241871ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 121.641898ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 122.194796ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 121.461185ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 121.791336ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 121.678325ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 121.481864ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 122.248626ms) Jan 24 18:05:18.990: INFO: (5) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 122.042823ms) Jan 24 18:05:18.991: INFO: (5) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 123.03097ms) Jan 24 18:05:19.125: INFO: (6) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... 
(200; 134.16511ms) Jan 24 18:05:19.125: INFO: (6) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 133.431918ms) Jan 24 18:05:19.125: INFO: (6) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 134.00462ms) Jan 24 18:05:19.125: INFO: (6) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 133.647624ms) Jan 24 18:05:19.125: INFO: (6) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 133.921086ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 134.102797ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 133.932052ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 134.438091ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 135.091865ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 134.629719ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 134.572269ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 134.826926ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 133.95656ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 134.550262ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 134.815049ms) Jan 24 18:05:19.126: INFO: (6) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 134.596579ms) Jan 24 18:05:19.236: INFO: (7) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 109.412035ms) Jan 24 18:05:19.237: INFO: (7) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 110.057111ms) Jan 24 18:05:19.241: INFO: (7) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 113.238067ms) Jan 24 18:05:19.241: INFO: (7) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 113.42305ms) Jan 24 18:05:19.241: INFO: (7) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 113.442384ms) Jan 24 18:05:19.242: INFO: (7) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.403903ms) Jan 24 18:05:19.242: INFO: (7) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 114.869439ms) Jan 24 18:05:19.242: INFO: (7) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 114.817799ms) Jan 24 18:05:19.243: INFO: (7) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 115.627637ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 115.897305ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 116.061762ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 116.183019ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 116.301594ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 116.307103ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 116.180596ms) Jan 24 18:05:19.244: INFO: (7) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 116.162075ms) Jan 24 18:05:19.355: INFO: (8) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 110.194519ms) Jan 24 18:05:19.355: INFO: (8) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 110.110568ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 110.751127ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 111.137442ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 111.20584ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.45134ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.563197ms) Jan 24 18:05:19.356: INFO: (8) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 111.480902ms) Jan 24 18:05:19.357: INFO: (8) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 112.257109ms) Jan 24 18:05:19.357: INFO: (8) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 112.724882ms) Jan 24 18:05:19.357: INFO: (8) 
/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 112.291932ms) Jan 24 18:05:19.357: INFO: (8) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 112.338168ms) Jan 24 18:05:19.359: INFO: (8) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 114.23199ms) Jan 24 18:05:19.359: INFO: (8) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.258059ms) Jan 24 18:05:19.359: INFO: (8) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 114.466917ms) Jan 24 18:05:19.359: INFO: (8) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 114.666579ms) Jan 24 18:05:19.467: INFO: (9) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 107.012909ms) Jan 24 18:05:19.471: INFO: (9) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.270435ms) Jan 24 18:05:19.471: INFO: (9) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.258409ms) Jan 24 18:05:19.471: INFO: (9) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.425965ms) Jan 24 18:05:19.472: INFO: (9) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 111.611407ms) Jan 24 18:05:19.472: INFO: (9) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... 
(200; 111.578922ms) Jan 24 18:05:19.472: INFO: (9) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.515641ms) Jan 24 18:05:19.472: INFO: (9) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 112.249277ms) Jan 24 18:05:19.472: INFO: (9) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 112.29734ms) Jan 24 18:05:19.473: INFO: (9) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 112.618345ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.791437ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 115.091152ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 115.145026ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 115.194824ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 115.325245ms) Jan 24 18:05:19.475: INFO: (9) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 115.347601ms) Jan 24 18:05:19.587: INFO: (10) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.346041ms) Jan 24 18:05:19.587: INFO: (10) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 111.281242ms) Jan 24 18:05:19.588: INFO: (10) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 111.600177ms) Jan 24 18:05:19.589: INFO: (10) 
/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 112.723878ms) Jan 24 18:05:19.589: INFO: (10) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 112.828247ms) Jan 24 18:05:19.589: INFO: (10) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 113.076078ms) Jan 24 18:05:19.589: INFO: (10) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 112.913357ms) Jan 24 18:05:19.590: INFO: (10) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 113.828247ms) Jan 24 18:05:19.591: INFO: (10) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 115.376788ms) Jan 24 18:05:19.591: INFO: (10) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 115.294268ms) Jan 24 18:05:19.591: INFO: (10) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... 
(200; 115.340811ms) Jan 24 18:05:19.592: INFO: (10) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 115.669975ms) Jan 24 18:05:19.592: INFO: (10) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 115.883527ms) Jan 24 18:05:19.592: INFO: (10) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 115.567196ms) Jan 24 18:05:19.592: INFO: (10) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 115.582412ms) Jan 24 18:05:19.592: INFO: (10) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 115.904127ms) Jan 24 18:05:19.699: INFO: (11) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 107.216024ms) Jan 24 18:05:19.699: INFO: (11) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 107.12592ms) Jan 24 18:05:19.701: INFO: (11) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 109.093262ms) Jan 24 18:05:19.702: INFO: (11) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 109.340681ms) Jan 24 18:05:19.706: INFO: (11) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 114.286517ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 114.470127ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 114.59268ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.745336ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 114.755041ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 115.020106ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 115.21116ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 115.007766ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 115.282786ms) Jan 24 18:05:19.707: INFO: (11) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 115.122355ms) Jan 24 18:05:19.708: INFO: (11) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 115.53048ms) Jan 24 18:05:19.708: INFO: (11) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 115.567127ms) Jan 24 18:05:19.832: INFO: (12) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 123.606415ms) Jan 24 18:05:19.833: INFO: (12) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 123.488564ms) Jan 24 18:05:19.834: INFO: (12) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 125.509505ms) Jan 24 18:05:19.834: INFO: (12) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 124.353428ms) Jan 24 18:05:19.834: INFO: (12) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 124.183718ms) Jan 24 18:05:19.834: INFO: (12) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 124.322272ms) Jan 24 18:05:19.835: INFO: (12) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 126.114081ms) Jan 24 18:05:19.835: INFO: (12) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 125.819685ms) Jan 24 18:05:19.835: INFO: (12) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 126.078592ms) Jan 24 18:05:19.836: INFO: (12) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 126.909347ms) Jan 24 18:05:19.836: INFO: (12) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 127.572702ms) Jan 24 18:05:19.837: INFO: (12) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 127.684847ms) Jan 24 18:05:19.838: INFO: (12) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 128.226082ms) Jan 24 18:05:19.838: INFO: (12) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 128.257835ms) Jan 24 18:05:19.839: INFO: (12) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 129.217122ms) Jan 24 18:05:19.839: INFO: (12) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 129.699595ms) Jan 24 18:05:19.946: INFO: (13) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 106.347686ms) Jan 24 18:05:19.949: INFO: (13) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 110.005162ms) Jan 24 18:05:19.950: INFO: (13) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 109.780301ms) Jan 24 18:05:19.950: INFO: (13) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 109.223976ms) Jan 24 18:05:19.950: INFO: (13) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 110.852568ms) Jan 24 18:05:19.951: INFO: (13) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 111.010674ms) Jan 24 18:05:19.953: INFO: (13) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 113.616552ms) Jan 24 18:05:19.953: INFO: (13) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 113.781007ms) Jan 24 18:05:19.953: INFO: (13) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 113.618919ms) Jan 24 18:05:19.953: INFO: (13) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 113.518902ms) Jan 24 18:05:19.954: INFO: (13) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 113.777014ms) Jan 24 18:05:19.954: INFO: (13) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 114.563953ms) Jan 24 18:05:19.955: INFO: (13) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 114.615116ms) Jan 24 18:05:19.955: INFO: (13) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 115.172749ms) Jan 24 18:05:19.956: INFO: (13) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 115.955545ms) Jan 24 18:05:19.956: INFO: (13) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 115.993774ms) Jan 24 18:05:20.063: INFO: (14) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 105.600739ms) Jan 24 18:05:20.063: INFO: (14) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 106.722151ms) Jan 24 18:05:20.063: INFO: (14) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 106.134021ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 110.507423ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 110.299036ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 110.299592ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 110.498203ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 110.445888ms) Jan 24 18:05:20.068: INFO: (14) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 111.50484ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... 
(200; 112.922316ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 113.052012ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 113.427893ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 113.156334ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 113.055509ms) Jan 24 18:05:20.071: INFO: (14) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 113.061169ms) Jan 24 18:05:20.070: INFO: (14) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 112.977311ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 114.418745ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 114.383289ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 114.740733ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 114.968814ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 114.833982ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 115.241874ms) Jan 24 18:05:20.186: INFO: (15) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 114.808068ms) Jan 24 18:05:20.187: INFO: (15) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 115.388944ms) Jan 24 18:05:20.187: INFO: (15) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 115.694566ms) Jan 24 18:05:20.189: INFO: (15) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 118.291892ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 119.361ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 119.767023ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 119.68665ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 119.488216ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 119.676151ms) Jan 24 18:05:20.191: INFO: (15) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 119.863188ms) Jan 24 18:05:20.301: INFO: (16) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 109.600188ms) Jan 24 18:05:20.311: INFO: (16) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 119.098567ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 122.701272ms) Jan 24 18:05:20.314: INFO: (16) 
/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 122.557417ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 121.555662ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 122.134008ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 121.988823ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 122.443018ms) Jan 24 18:05:20.314: INFO: (16) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 122.469094ms) Jan 24 18:05:20.315: INFO: (16) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 122.746033ms) Jan 24 18:05:20.316: INFO: (16) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 124.372617ms) Jan 24 18:05:20.317: INFO: (16) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 124.05785ms) Jan 24 18:05:20.317: INFO: (16) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 124.656371ms) Jan 24 18:05:20.317: INFO: (16) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 125.498379ms) Jan 24 18:05:20.317: INFO: (16) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 124.642825ms) Jan 24 18:05:20.317: INFO: (16) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 124.868618ms) Jan 24 18:05:20.424: INFO: (17) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... (200; 106.490725ms) Jan 24 18:05:20.424: INFO: (17) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 106.502073ms) Jan 24 18:05:20.428: INFO: (17) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 110.310878ms) Jan 24 18:05:20.428: INFO: (17) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 110.183136ms) Jan 24 18:05:20.428: INFO: (17) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 110.482079ms) Jan 24 18:05:20.429: INFO: (17) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 111.141206ms) Jan 24 18:05:20.431: INFO: (17) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 112.716372ms) Jan 24 18:05:20.431: INFO: (17) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 113.559065ms) Jan 24 18:05:20.432: INFO: (17) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 113.520999ms) Jan 24 18:05:20.432: INFO: (17) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 113.914612ms) Jan 24 18:05:20.433: INFO: (17) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 115.193955ms) Jan 24 18:05:20.433: INFO: (17) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 115.186428ms) Jan 24 18:05:20.433: INFO: (17) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 115.490737ms) Jan 24 18:05:20.434: INFO: (17) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 116.561769ms) Jan 24 18:05:20.434: INFO: (17) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 116.485145ms) Jan 24 18:05:20.434: INFO: (17) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 116.434596ms) Jan 24 18:05:20.545: INFO: (18) 
/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 109.590129ms) Jan 24 18:05:20.552: INFO: (18) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 115.367488ms) Jan 24 18:05:20.552: INFO: (18) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 115.602976ms) Jan 24 18:05:20.552: INFO: (18) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 116.521045ms) Jan 24 18:05:20.552: INFO: (18) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 116.009431ms) Jan 24 18:05:20.552: INFO: (18) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 115.922413ms) Jan 24 18:05:20.553: INFO: (18) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... (200; 117.130563ms) Jan 24 18:05:20.556: INFO: (18) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 119.816518ms) Jan 24 18:05:20.556: INFO: (18) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 120.288818ms) Jan 24 18:05:20.557: INFO: (18) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 120.399638ms) Jan 24 18:05:20.557: INFO: (18) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 120.686261ms) Jan 24 18:05:20.558: INFO: (18) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 122.155869ms) Jan 24 18:05:20.558: INFO: (18) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 121.970289ms) Jan 24 18:05:20.558: INFO: (18) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 122.076093ms) Jan 24 18:05:20.559: INFO: (18) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 122.393718ms) Jan 24 18:05:20.559: INFO: (18) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 122.575386ms) Jan 24 18:05:20.670: INFO: (19) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:162/proxy/: bar (200; 109.979709ms) Jan 24 18:05:20.670: INFO: (19) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:160/proxy/: foo (200; 110.722314ms) Jan 24 18:05:20.671: INFO: (19) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:462/proxy/: tls qux (200; 110.529377ms) Jan 24 18:05:20.671: INFO: (19) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td:1080/proxy/rewriteme">test<... (200; 110.519119ms) Jan 24 18:05:20.671: INFO: (19) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:1080/proxy/rewriteme">... 
(200; 111.322305ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/proxy-service-n9sfx-878td/proxy/rewriteme">test</a> (200; 113.020807ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:160/proxy/: foo (200; 113.056976ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname2/proxy/: tls qux (200; 113.161054ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:460/proxy/: tls baz (200; 113.301452ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname1/proxy/: foo (200; 113.095173ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/pods/http:proxy-service-n9sfx-878td:162/proxy/: bar (200; 113.900928ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/: <a href="/api/v1/namespaces/proxy-4417/pods/https:proxy-service-n9sfx-878td:443/proxy/tlsrewritem... 
(200; 113.460595ms) Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname1/proxy/: foo (200; 113.466309ms)
Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/services/proxy-service-n9sfx:portname2/proxy/: bar (200; 113.662259ms)
Jan 24 18:05:20.673: INFO: (19) /api/v1/namespaces/proxy-4417/services/https:proxy-service-n9sfx:tlsportname1/proxy/: tls baz (200; 113.581839ms)
Jan 24 18:05:20.674: INFO: (19) /api/v1/namespaces/proxy-4417/services/http:proxy-service-n9sfx:portname2/proxy/: bar (200; 114.082675ms)
STEP: deleting ReplicationController proxy-service-n9sfx in namespace proxy-4417, will wait for the garbage collector to delete the pods
Jan 24 18:05:21.038: INFO: Deleting ReplicationController proxy-service-n9sfx took: 108.808667ms
Jan 24 18:05:21.138: INFO: Terminating ReplicationController proxy-service-n9sfx pods took: 100.697693ms
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:22.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4417" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":346,"completed":17,"skipped":239,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:22.356: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: Gathering metrics
Jan 24 18:05:23.973: INFO: The status of Pod kube-controller-manager-capz-conf-ewh6sx-control-plane-pt2q9 is Running (Ready = true)
Jan 24 18:05:25.038: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:25.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8005" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":346,"completed":18,"skipped":239,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client
  Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:25.259: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if v1 is in available api versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating api versions
Jan 24 18:05:25.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9599 api-versions'
Jan 24 18:05:26.358: INFO: stderr: ""
Jan 24 18:05:26.358: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ncrd.projectcalico.org/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:26.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9599" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":346,"completed":19,"skipped":253,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:26.572: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:27.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5816" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":346,"completed":20,"skipped":273,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:27.418: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:05:28.050: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0" in namespace "projected-8256" to be "Succeeded or Failed"
Jan 24 18:05:28.155: INFO: Pod "downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0": Phase="Pending", Reason="", readiness=false. Elapsed: 104.597445ms
Jan 24 18:05:30.259: INFO: Pod "downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20873194s
STEP: Saw pod success
Jan 24 18:05:30.259: INFO: Pod "downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0" satisfied condition "Succeeded or Failed"
Jan 24 18:05:30.369: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0 container client-container: <nil>
STEP: delete the pod
Jan 24 18:05:30.596: INFO: Waiting for pod downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0 to disappear
Jan 24 18:05:30.707: INFO: Pod downwardapi-volume-1dbfec59-449a-47fd-a81a-f794452a76f0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:30.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8256" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":21,"skipped":298,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:30.929: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 24 18:05:31.553: INFO: Waiting up to 5m0s for pod "pod-6654a4cf-301f-4f87-9d9a-85b8994a877a" in namespace "emptydir-3639" to be "Succeeded or Failed"
Jan 24 18:05:31.672: INFO: Pod "pod-6654a4cf-301f-4f87-9d9a-85b8994a877a": Phase="Pending", Reason="", readiness=false. Elapsed: 119.079426ms
Jan 24 18:05:33.781: INFO: Pod "pod-6654a4cf-301f-4f87-9d9a-85b8994a877a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.227574988s
STEP: Saw pod success
Jan 24 18:05:33.781: INFO: Pod "pod-6654a4cf-301f-4f87-9d9a-85b8994a877a" satisfied condition "Succeeded or Failed"
Jan 24 18:05:33.885: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-6654a4cf-301f-4f87-9d9a-85b8994a877a container test-container: <nil>
STEP: delete the pod
Jan 24 18:05:34.108: INFO: Waiting for pod pod-6654a4cf-301f-4f87-9d9a-85b8994a877a to disappear
Jan 24 18:05:34.211: INFO: Pod pod-6654a4cf-301f-4f87-9d9a-85b8994a877a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:05:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3639" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":22,"skipped":301,"failed":0}
S
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:05:34.431: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating pod pod-subpath-test-projected-58wh �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Jan 24 18:05:35.268: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-58wh" in namespace "subpath-5851" to be "Succeeded or Failed" Jan 24 18:05:35.372: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Pending", Reason="", readiness=false. Elapsed: 103.433739ms Jan 24 18:05:37.477: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 2.208943022s Jan 24 18:05:39.582: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 4.313666835s Jan 24 18:05:41.687: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 6.418855581s Jan 24 18:05:43.793: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 8.524866221s Jan 24 18:05:45.899: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 10.630761159s Jan 24 18:05:48.004: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 12.735884741s Jan 24 18:05:50.111: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 14.84246111s Jan 24 18:05:52.217: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 16.948566181s Jan 24 18:05:54.323: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 19.054133205s Jan 24 18:05:56.427: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Running", Reason="", readiness=true. Elapsed: 21.159091152s Jan 24 18:05:58.532: INFO: Pod "pod-subpath-test-projected-58wh": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 23.263770649s �[1mSTEP�[0m: Saw pod success Jan 24 18:05:58.532: INFO: Pod "pod-subpath-test-projected-58wh" satisfied condition "Succeeded or Failed" Jan 24 18:05:58.637: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-subpath-test-projected-58wh container test-container-subpath-projected-58wh: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:05:58.864: INFO: Waiting for pod pod-subpath-test-projected-58wh to disappear Jan 24 18:05:58.967: INFO: Pod pod-subpath-test-projected-58wh no longer exists �[1mSTEP�[0m: Deleting pod pod-subpath-test-projected-58wh Jan 24 18:05:58.968: INFO: Deleting pod "pod-subpath-test-projected-58wh" in namespace "subpath-5851" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:05:59.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "subpath-5851" for this suite. �[32m•�[0m{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":346,"completed":23,"skipped":302,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] EmptyDir volumes�[0m �[1mshould support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:05:59.286: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support 
(root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on node default medium Jan 24 18:05:59.912: INFO: Waiting up to 5m0s for pod "pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a" in namespace "emptydir-5982" to be "Succeeded or Failed" Jan 24 18:06:00.015: INFO: Pod "pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a": Phase="Pending", Reason="", readiness=false. Elapsed: 102.989542ms Jan 24 18:06:02.121: INFO: Pod "pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208798614s �[1mSTEP�[0m: Saw pod success Jan 24 18:06:02.121: INFO: Pod "pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a" satisfied condition "Succeeded or Failed" Jan 24 18:06:02.225: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:06:02.443: INFO: Waiting for pod pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a to disappear Jan 24 18:06:02.547: INFO: Pod pod-55f5d0a2-1f47-4fa6-bfc2-15e624b94b5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:06:02.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5982" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":24,"skipped":311,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:06:02.762: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-28fcabf0-4886-4247-9bf4-ad83b1dc3eea
STEP: Creating a pod to test consume secrets
Jan 24 18:06:03.498: INFO: Waiting up to 5m0s for pod "pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849" in namespace "secrets-1413" to be "Succeeded or Failed"
Jan 24 18:06:03.603: INFO: Pod "pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849": Phase="Pending", Reason="", readiness=false. Elapsed: 103.997397ms
Jan 24 18:06:05.708: INFO: Pod "pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209613044s
STEP: Saw pod success
Jan 24 18:06:05.708: INFO: Pod "pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849" satisfied condition "Succeeded or Failed"
Jan 24 18:06:05.812: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849 container secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:06:06.029: INFO: Waiting for pod pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849 to disappear
Jan 24 18:06:06.132: INFO: Pod pod-secrets-550f00f0-66fb-4b05-a801-2135f4b89849 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:06:06.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1413" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":25,"skipped":318,"failed":0}
------------------------------
[sig-node] Container Runtime
  blackbox test
  when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:06:06.348: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:06:32.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7504" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":346,"completed":26,"skipped":318,"failed":0}
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:06:32.818: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Jan 24 18:06:33.548: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:06:35.652: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 24 18:06:36.067: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:06:36.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7198" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":346,"completed":27,"skipped":318,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:06:36.601: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:06:37.121: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 24 18:06:42.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-637 --namespace=crd-publish-openapi-637 create -f -'
Jan 24 18:06:44.032: INFO: stderr: ""
Jan 24 18:06:44.032: INFO: stdout: "e2e-test-crd-publish-openapi-7116-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 24 18:06:44.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-637 --namespace=crd-publish-openapi-637 delete e2e-test-crd-publish-openapi-7116-crds test-cr'
Jan 24 18:06:44.617: INFO: stderr: ""
Jan 24 18:06:44.617: INFO: stdout: "e2e-test-crd-publish-openapi-7116-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jan 24 18:06:44.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-637 --namespace=crd-publish-openapi-637 apply -f -'
Jan 24 18:06:45.678: INFO: stderr: ""
Jan 24 18:06:45.678: INFO: stdout: "e2e-test-crd-publish-openapi-7116-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jan 24 18:06:45.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-637 --namespace=crd-publish-openapi-637 delete e2e-test-crd-publish-openapi-7116-crds test-cr'
Jan 24 18:06:46.274: INFO: stderr: ""
Jan 24 18:06:46.274: INFO: stdout: "e2e-test-crd-publish-openapi-7116-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jan 24 18:06:46.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-637 explain e2e-test-crd-publish-openapi-7116-crds'
Jan 24 18:06:47.014: INFO: stderr: ""
Jan 24 18:06:47.014: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-7116-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:06:52.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-637" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":346,"completed":28,"skipped":325,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:06:52.564: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-83d19eba-d505-4050-848a-795e9803e32b
STEP: Creating secret with name s-test-opt-upd-2faa588e-441a-4438-ad72-7aeedced7121
STEP: Creating the pod
Jan 24 18:06:53.609: INFO: The status of Pod pod-projected-secrets-fcfabc15-2a38-4b9d-89ba-cb9d9da7211e is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:06:55.714: INFO: The status of Pod pod-projected-secrets-fcfabc15-2a38-4b9d-89ba-cb9d9da7211e is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:06:57.715: INFO: The status of Pod pod-projected-secrets-fcfabc15-2a38-4b9d-89ba-cb9d9da7211e is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-83d19eba-d505-4050-848a-795e9803e32b
STEP: Updating secret s-test-opt-upd-2faa588e-441a-4438-ad72-7aeedced7121
STEP: Creating secret with name s-test-opt-create-c2f8ada3-d81b-4d1c-8bba-976f10ee755e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:04.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-879" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":29,"skipped":328,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:08:04.311: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:08:06.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180485, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180485, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180485, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180485, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:08:09.278: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 24 18:08:11.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-3677 attach --namespace=webhook-3677 to-be-attached-pod -i -c=container1'
Jan 24 18:08:12.774: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:12.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3677" for this suite.
STEP: Destroying namespace "webhook-3677-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":346,"completed":30,"skipped":338,"failed":0}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:08:13.663: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should run and stop simple daemon [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 24 18:08:14.852: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:14.957: INFO: Number of nodes with available pods: 0
Jan 24 18:08:14.957: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:16.066: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:16.171: INFO: Number of nodes with available pods: 0
Jan 24 18:08:16.171: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:17.065: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:17.171: INFO: Number of nodes with available pods: 0
Jan 24 18:08:17.171: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:18.066: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:18.171: INFO: Number of nodes with available pods: 1
Jan 24 18:08:18.171: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:19.066: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:19.171: INFO: Number of nodes with available pods: 2
Jan 24 18:08:19.171: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 24 18:08:19.594: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:19.698: INFO: Number of nodes with available pods: 1
Jan 24 18:08:19.698: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:20.809: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:20.913: INFO: Number of nodes with available pods: 1
Jan 24 18:08:20.913: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:21.808: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:21.913: INFO: Number of nodes with available pods: 1
Jan 24 18:08:21.913: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:22.808: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:22.913: INFO: Number of nodes with available pods: 1
Jan 24 18:08:22.913: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:08:23.813: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:08:23.918: INFO: Number of nodes with available pods: 2
Jan 24 18:08:23.918: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-636, will wait for the garbage collector to delete the pods
Jan 24 18:08:24.383: INFO: Deleting DaemonSet.extensions daemon-set took: 105.985912ms
Jan 24 18:08:24.484: INFO: Terminating DaemonSet.extensions daemon-set pods took: 101.132851ms
Jan 24 18:08:26.588: INFO: Number of nodes with available pods: 0
Jan 24 18:08:26.588: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 18:08:26.690: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"4288"},"items":null}
Jan 24 18:08:26.793: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"4288"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:27.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-636" for this suite.
�[32m•�[0m{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":346,"completed":31,"skipped":343,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] EmptyDir volumes�[0m �[1mshould support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:08:27.324: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on node default medium Jan 24 18:08:27.953: INFO: Waiting up to 5m0s for pod "pod-199a75fa-9885-49e0-9aeb-814cc9a014dd" in namespace "emptydir-2039" to be "Succeeded or Failed" Jan 24 18:08:28.056: INFO: Pod "pod-199a75fa-9885-49e0-9aeb-814cc9a014dd": Phase="Pending", Reason="", readiness=false. Elapsed: 102.53577ms Jan 24 18:08:30.161: INFO: Pod "pod-199a75fa-9885-49e0-9aeb-814cc9a014dd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.207932515s �[1mSTEP�[0m: Saw pod success Jan 24 18:08:30.162: INFO: Pod "pod-199a75fa-9885-49e0-9aeb-814cc9a014dd" satisfied condition "Succeeded or Failed" Jan 24 18:08:30.265: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-199a75fa-9885-49e0-9aeb-814cc9a014dd container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:08:30.497: INFO: Waiting for pod pod-199a75fa-9885-49e0-9aeb-814cc9a014dd to disappear Jan 24 18:08:30.600: INFO: Pod pod-199a75fa-9885-49e0-9aeb-814cc9a014dd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:08:30.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-2039" for this suite. �[32m•�[0m{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":32,"skipped":368,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-network] Services�[0m �[1mshould have session affinity work for NodePort service [LinuxOnly] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:08:30.818: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity work for NodePort service 
[LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-8527
STEP: creating service affinity-nodeport in namespace services-8527
STEP: creating replication controller affinity-nodeport in namespace services-8527
I0124 18:08:31.567238 14 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-8527, replica count: 3
I0124 18:08:34.719499 14 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 18:08:35.034: INFO: Creating new exec pod
Jan 24 18:08:38.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8527 exec execpod-affinity72qlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Jan 24 18:08:39.684: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport 80\n+ echo hostName\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Jan 24 18:08:39.684: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:08:39.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8527 exec execpod-affinity72qlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.111.82.79 80'
Jan 24 18:08:40.884: INFO: stderr: "+ + echo hostName\nnc -v -t -w 2 10.111.82.79 80\nConnection to 10.111.82.79 80 port [tcp/http] succeeded!\n"
Jan 24 18:08:40.884: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:08:40.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8527 exec execpod-affinity72qlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.1.0.4 31022'
Jan 24 18:08:42.079: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.1.0.4 31022\nConnection to 10.1.0.4 31022 port [tcp/*] succeeded!\n"
Jan 24 18:08:42.079: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:08:42.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8527 exec execpod-affinity72qlg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.1.0.5 31022'
Jan 24 18:08:43.294: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.1.0.5 31022\nConnection to 10.1.0.5 31022 port [tcp/*] succeeded!\n"
Jan 24 18:08:43.294: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:08:43.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8527 exec execpod-affinity72qlg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.1.0.4:31022/ ; done'
Jan 24 18:08:44.608: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31022/\n"
Jan 24 18:08:44.608: INFO: stdout: "\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq\naffinity-nodeport-4k9tq"
Jan 24 18:08:44.608: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Received response from host: affinity-nodeport-4k9tq
Jan 24 18:08:44.609: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-8527, will wait for the garbage collector to delete the pods
Jan 24 18:08:45.089: INFO: Deleting ReplicationController affinity-nodeport took: 104.604402ms
Jan 24 18:08:45.191: INFO: Terminating ReplicationController affinity-nodeport pods took: 101.224819ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:47.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8527" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":33,"skipped":375,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
deployment should delete old replica sets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:08:48.033: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should delete old replica sets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:08:48.764: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 18:08:50.970: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 24 18:08:53.853: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-7318 f6fdd7ca-ab21-4abc-8944-90b93e9ce610 4593 1 2023-01-24 18:08:51 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-24 18:08:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:08:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0031a67a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment 
has minimum availability.,LastUpdateTime:2023-01-24 18:08:51 +0000 UTC,LastTransitionTime:2023-01-24 18:08:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-5b4d99b59b" has successfully progressed.,LastUpdateTime:2023-01-24 18:08:53 +0000 UTC,LastTransitionTime:2023-01-24 18:08:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 18:08:53.961: INFO: New ReplicaSet "test-cleanup-deployment-5b4d99b59b" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5b4d99b59b deployment-7318 a377eaf6-e293-435b-8097-ed0d0e43cc9c 4583 1 2023-01-24 18:08:51 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment f6fdd7ca-ab21-4abc-8944-90b93e9ce610 0xc002b201f7 0xc002b201f8}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:08:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6fdd7ca-ab21-4abc-8944-90b93e9ce610\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:08:53 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5b4d99b59b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b20418 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:08:54.065: INFO: Pod "test-cleanup-deployment-5b4d99b59b-9btlc" is available: &Pod{ObjectMeta:{test-cleanup-deployment-5b4d99b59b-9btlc test-cleanup-deployment-5b4d99b59b- deployment-7318 6395a7cf-97a7-477d-b413-a762d1dfa240 4582 0 2023-01-24 18:08:51 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5b4d99b59b] map[cni.projectcalico.org/containerID:3f0a6ee8d4cb3a0c2f6104cea1b999aa255ed225d8bcc859bbe7cd5e3198e349 cni.projectcalico.org/podIP:192.168.237.159/32 cni.projectcalico.org/podIPs:192.168.237.159/32] [{apps/v1 ReplicaSet test-cleanup-deployment-5b4d99b59b 
a377eaf6-e293-435b-8097-ed0d0e43cc9c 0xc002b20a97 0xc002b20a98}] [] [{calico Update v1 2023-01-24 18:08:51 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kube-controller-manager Update v1 2023-01-24 18:08:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a377eaf6-e293-435b-8097-ed0d0e43cc9c\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:08:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.159\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qdqh5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qdqh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:08:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:08:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:08:53 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:08:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.159,StartTime:2023-01-24 18:08:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:08:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://b24217ec00350798e1d31603dfbd59b2510ac88964d36419d1c9d21de5fe1f4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:54.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7318" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":346,"completed":34,"skipped":395,"failed":0}
SSSSS
------------------------------
[sig-node] Pods
should run through the lifecycle of Pods and PodStatus [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:08:54.280: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Jan 24 18:08:55.174: INFO: observed Pod pod-test in namespace pods-685 in phase Pending with labels: map[test-pod-static:true] & conditions []
Jan 24 18:08:55.174: INFO: observed Pod pod-test in namespace pods-685 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  }]
Jan 24 18:08:55.174: INFO: observed Pod pod-test in namespace pods-685 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  }]
Jan 24 18:08:55.649: INFO: observed Pod pod-test in namespace pods-685 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  }]
Jan 24 18:08:56.477: INFO: Found Pod pod-test in namespace pods-685 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:56 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:56 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:08:55 +0000 UTC  }]
STEP: patching the Pod with a new Label and updated data
Jan 24 18:08:56.685: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Jan 24 18:08:57.210: INFO: observed event type ADDED
Jan 24 18:08:57.210: INFO: observed event type MODIFIED
Jan 24 18:08:57.212: INFO: observed event type MODIFIED
Jan 24 18:08:57.212: INFO: observed event type MODIFIED
Jan 24 18:08:57.213: INFO: observed event type MODIFIED
Jan 24 18:08:57.213: INFO: observed event type MODIFIED
Jan 24 18:08:57.213: INFO: observed event type MODIFIED
Jan 24 18:08:57.214: INFO: observed event type MODIFIED
Jan 24 18:08:58.483: INFO: observed event type MODIFIED
Jan 24 18:08:58.821: INFO: observed event type MODIFIED
Jan 24 18:08:59.488: INFO: observed event type MODIFIED
Jan 24 18:08:59.499: INFO: observed event type MODIFIED
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:08:59.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-685" for this suite.
•{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":346,"completed":35,"skipped":400,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume
should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:08:59.721: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:09:00.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3" in namespace "downward-api-6309" to be "Succeeded or Failed"
Jan 24 18:09:00.445: INFO: Pod "downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3": Phase="Pending", Reason="", readiness=false. Elapsed: 102.706728ms
Jan 24 18:09:02.550: INFO: Pod "downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207817426s
STEP: Saw pod success
Jan 24 18:09:02.550: INFO: Pod "downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3" satisfied condition "Succeeded or Failed"
Jan 24 18:09:02.655: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3 container client-container: <nil>
STEP: delete the pod
Jan 24 18:09:02.871: INFO: Waiting for pod downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3 to disappear
Jan 24 18:09:02.974: INFO: Pod downwardapi-volume-9c223000-d161-4604-9d38-2c7c16a694c3 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:09:02.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6309" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":346,"completed":36,"skipped":408,"failed":0}
SS
------------------------------
[sig-apps] Deployment
should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:09:03.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should run the lifecycle of a Deployment [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Jan 24 18:09:04.117: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.117: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.117: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.117: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.118: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.118: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.118: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:04.118: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 24 18:09:05.323: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 24 18:09:05.323: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 24 18:09:05.519: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Jan 24 18:09:05.731: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Jan 24 18:09:05.833: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.833: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.833: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.833: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.834: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.834: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.834: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.834: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 0
Jan 24 18:09:05.835: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:05.835: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:05.835: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.835: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.836: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.836: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.836: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.836: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:05.837: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
Jan 24 18:09:07.341: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:07.341: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2
Jan 24 18:09:07.366: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
STEP: listing Deployments
Jan 24 18:09:07.470: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Jan 24 18:09:07.681: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Jan 24 18:09:07.889: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:07.889: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:07.890: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:07.890: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:07.891: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:09.339: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:09.407: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:09.419: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 24 18:09:10.562: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching 
the DeploymentStatus �[1mSTEP�[0m: fetching the DeploymentStatus Jan 24 18:09:10.996: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 Jan 24 18:09:10.996: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 Jan 24 18:09:10.997: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 Jan 24 18:09:10.999: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 Jan 24 18:09:11.000: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 1 Jan 24 18:09:11.000: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 Jan 24 18:09:11.001: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 Jan 24 18:09:11.001: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 2 Jan 24 18:09:11.001: INFO: observed Deployment test-deployment in namespace deployment-8019 with ReadyReplicas 3 �[1mSTEP�[0m: deleting the Deployment Jan 24 18:09:11.230: INFO: observed event type MODIFIED Jan 24 18:09:11.230: INFO: observed event type MODIFIED Jan 24 18:09:11.231: INFO: observed event type MODIFIED Jan 24 18:09:11.232: INFO: observed event type MODIFIED Jan 24 18:09:11.232: INFO: observed event type MODIFIED Jan 24 18:09:11.233: INFO: observed event type MODIFIED Jan 24 18:09:11.233: INFO: observed event type MODIFIED Jan 24 18:09:11.234: INFO: observed event type MODIFIED Jan 24 18:09:11.234: INFO: observed event type MODIFIED Jan 24 18:09:11.235: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 24 18:09:11.339: INFO: Log out all the ReplicaSets if there is no deployment created [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:09:11.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-8019" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":346,"completed":37,"skipped":410,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-api-machinery] server version�[0m �[1mshould find the server version [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:09:11.653: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename server-version �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Request ServerVersion �[1mSTEP�[0m: Confirm major version Jan 24 18:09:12.270: INFO: Major version: 1 �[1mSTEP�[0m: Confirm minor version Jan 24 18:09:12.270: INFO: cleanMinorVersion: 22 Jan 24 18:09:12.271: INFO: Minor version: 22 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:09:12.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "server-version-5765" for this suite. 
•{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":346,"completed":38,"skipped":414,"failed":0}
------------------------------
[sig-node] Container Lifecycle Hook
  when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:09:12.483: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Jan 24 18:09:13.210: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:09:15.314: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Jan 24 18:09:15.627: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:09:17.731: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jan 24 18:09:17.937: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 18:09:18.041: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 18:09:20.042: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 18:09:20.146: INFO: Pod pod-with-prestop-http-hook still exists
Jan 24 18:09:22.042: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 24 18:09:22.145: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:09:22.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2041" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":346,"completed":39,"skipped":443,"failed":0}
------------------------------
[sig-network] Services
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:09:22.467: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-3666
STEP: creating service affinity-clusterip-transition in namespace services-3666
STEP: creating replication controller affinity-clusterip-transition in namespace services-3666
I0124 18:09:23.197699      14 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-3666, replica count: 3
I0124 18:09:26.348721      14 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 18:09:26.554: INFO: Creating new exec pod
Jan 24 18:09:29.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3666 exec execpod-affinityp6rgp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Jan 24 18:09:31.010: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip-transition 80\n+ echo hostName\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 24 18:09:31.010: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:09:31.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3666 exec execpod-affinityp6rgp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.101.173.118 80'
Jan 24 18:09:32.156: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.101.173.118 80\nConnection to 10.101.173.118 80 port [tcp/http] succeeded!\n"
Jan 24 18:09:32.156: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:09:32.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3666 exec execpod-affinityp6rgp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.173.118:80/ ; done'
Jan 24 18:09:33.623: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n"
Jan 24 18:09:33.623: INFO: stdout: "\naffinity-clusterip-transition-jz26j\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-jz26j\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-jz26j\naffinity-clusterip-transition-5vn4f\naffinity-clusterip-transition-5vn4f"
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-jz26j
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-xl4s4
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-5vn4f
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-xl4s4
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-jz26j
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-5vn4f
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-xl4s4
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-jz26j
Jan 24 18:09:33.623: INFO: Received response from host: affinity-clusterip-transition-5vn4f
Jan 24 18:09:33.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3666 exec execpod-affinityp6rgp -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.101.173.118:80/ ; done'
Jan 24 18:09:35.111: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.101.173.118:80/\n"
Jan 24 18:09:35.111: INFO: stdout: "\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4\naffinity-clusterip-transition-xl4s4"
Jan 24 18:09:35.111: INFO: Received response from host: affinity-clusterip-transition-xl4s4
Jan 24 18:09:35.111: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3666, will wait for the garbage collector to delete the pods
Jan 24 18:09:35.584: INFO: Deleting ReplicationController affinity-clusterip-transition took: 104.870325ms
Jan 24 18:09:35.685: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.907023ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:09:38.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3666" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":40,"skipped":547,"failed":0}
------------------------------
[sig-cli] Kubectl client
  Kubectl server-side dry-run
  should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:09:38.321: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jan 24 18:09:38.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Jan 24 18:09:39.253: INFO: stderr: ""
Jan 24 18:09:39.253: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jan 24 18:09:39.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-1"}]}} --dry-run=server'
Jan 24 18:09:40.344: INFO: stderr: ""
Jan 24 18:09:40.344: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jan 24 18:09:40.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1826 delete pods e2e-test-httpd-pod'
Jan 24 18:09:43.472: INFO: stderr: ""
Jan 24 18:09:43.472: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:09:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1826" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":346,"completed":41,"skipped":548,"failed":0}
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  PriorityClass endpoints
  verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:09:43.689: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jan 24 18:09:44.515: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 18:10:45.426: INFO: Waiting for terminating namespaces to be deleted...
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:10:45.529: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption-path
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:679
[It] verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:10:46.356: INFO: PriorityClass.scheduling.k8s.io "p1" is invalid: Value: Forbidden: may not be changed in an update.
Jan 24 18:10:46.460: INFO: PriorityClass.scheduling.k8s.io "p2" is invalid: Value: Forbidden: may not be changed in an update.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:10:46.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-2042" for this suite.
[AfterEach] PriorityClass endpoints
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:693
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:10:47.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-2762" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":346,"completed":42,"skipped":560,"failed":0}
------------------------------
[sig-apps] DisruptionController
  should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:10:48.042: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should observe PodDisruptionBudget status updated [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
Jan 24 18:10:49.182: INFO: running pods: 0 < 3
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:10:51.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7313" for this suite.
•{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":346,"completed":43,"skipped":561,"failed":0}
------------------------------
[sig-node] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:10:51.603: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 24 18:10:52.329: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:11:00.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4187" for this suite.
•{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":346,"completed":44,"skipped":571,"failed":0}
------------------------------
[sig-storage] Projected configMap
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:11:01.127: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name cm-test-opt-del-03b83636-d4b8-4fa1-95c9-9eb8afae60d3
STEP: Creating configMap with name cm-test-opt-upd-c108fbd1-35ec-4e90-a412-37ff8e381033
STEP: Creating the pod
Jan 24 18:11:02.171: INFO: The status of Pod pod-projected-configmaps-2f1efe03-3f0f-48c7-a660-251e8cf0aa96 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:11:04.277: INFO: The status of Pod pod-projected-configmaps-2f1efe03-3f0f-48c7-a660-251e8cf0aa96 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-03b83636-d4b8-4fa1-95c9-9eb8afae60d3
STEP: Updating configmap cm-test-opt-upd-c108fbd1-35ec-4e90-a412-37ff8e381033
STEP: Creating configMap with name cm-test-opt-create-b2453f06-57fc-4b93-b778-1200930cd5e2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:12:23.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2069" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":45,"skipped":578,"failed":0}
------------------------------
[sig-node] Docker Containers
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:12:23.497: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override all
Jan 24 18:12:24.124: INFO: Waiting up to 5m0s for pod "client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681" in namespace "containers-6280" to be "Succeeded or Failed"
Jan 24 18:12:24.227: INFO: Pod "client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681": Phase="Pending", Reason="", readiness=false. Elapsed: 103.665925ms
Jan 24 18:12:26.331: INFO: Pod "client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207248062s
STEP: Saw pod success
Jan 24 18:12:26.331: INFO: Pod "client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681" satisfied condition "Succeeded or Failed"
Jan 24 18:12:26.435: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681 container agnhost-container: <nil>
STEP: delete the pod
Jan 24 18:12:26.659: INFO: Waiting for pod client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681 to disappear
Jan 24 18:12:26.761: INFO: Pod client-containers-65c3b093-8a42-44c3-b73a-4e2e093c1681 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:12:26.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6280" for this suite.
•{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":346,"completed":46,"skipped":585,"failed":0}
------------------------------
[sig-cli] Kubectl client
  Update Demo
  should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:12:26.976: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Jan 24 18:12:27.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 create -f -'
Jan 24 18:12:28.205: INFO: stderr: ""
Jan 24 18:12:28.205: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 24 18:12:28.205: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 24 18:12:28.633: INFO: stderr: ""
Jan 24 18:12:28.633: INFO: stdout: "update-demo-nautilus-ft6vx update-demo-nautilus-vkk4r "
Jan 24 18:12:28.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-ft6vx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 24 18:12:29.043: INFO: stderr: ""
Jan 24 18:12:29.043: INFO: stdout: ""
Jan 24 18:12:29.043: INFO: update-demo-nautilus-ft6vx is created but not running
Jan 24 18:12:34.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 24 18:12:34.468: INFO: stderr: ""
Jan 24 18:12:34.468: INFO: stdout: "update-demo-nautilus-ft6vx update-demo-nautilus-vkk4r "
Jan 24 18:12:34.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-ft6vx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 24 18:12:34.880: INFO: stderr: ""
Jan 24 18:12:34.880: INFO: stdout: ""
Jan 24 18:12:34.880: INFO: update-demo-nautilus-ft6vx is created but not running
Jan 24 18:12:39.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 24 18:12:40.307: INFO: stderr: ""
Jan 24 18:12:40.307: INFO: stdout: "update-demo-nautilus-ft6vx update-demo-nautilus-vkk4r "
Jan 24 18:12:40.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-ft6vx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 24 18:12:40.717: INFO: stderr: ""
Jan 24 18:12:40.717: INFO: stdout: "true"
Jan 24 18:12:40.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-ft6vx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 24 18:12:41.126: INFO: stderr: ""
Jan 24 18:12:41.126: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jan 24 18:12:41.126: INFO: validating pod update-demo-nautilus-ft6vx
Jan 24 18:12:41.234: INFO: got data: { "image": "nautilus.jpg" }
Jan 24 18:12:41.234: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 18:12:41.234: INFO: update-demo-nautilus-ft6vx is verified up and running
Jan 24 18:12:41.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-vkk4r -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 24 18:12:41.635: INFO: stderr: ""
Jan 24 18:12:41.635: INFO: stdout: "true"
Jan 24 18:12:41.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods update-demo-nautilus-vkk4r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 24 18:12:42.026: INFO: stderr: ""
Jan 24 18:12:42.026: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.4"
Jan 24 18:12:42.026: INFO: validating pod update-demo-nautilus-vkk4r
Jan 24 18:12:42.131: INFO: got data: { "image": "nautilus.jpg" }
Jan 24 18:12:42.131: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 24 18:12:42.131: INFO: update-demo-nautilus-vkk4r is verified up and running
STEP: using delete to clean up resources
Jan 24 18:12:42.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 delete --grace-period=0 --force -f -'
Jan 24 18:12:42.638: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 24 18:12:42.638: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 24 18:12:42.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get rc,svc -l name=update-demo --no-headers'
Jan 24 18:12:43.153: INFO: stderr: "No resources found in kubectl-7560 namespace.\n"
Jan 24 18:12:43.153: INFO: stdout: ""
Jan 24 18:12:43.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7560 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 24 18:12:43.557: INFO: stderr: ""
Jan 24 18:12:43.557: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:12:43.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7560" for this suite.
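The Update Demo test above repeatedly runs a kubectl go-template query and re-polls every few seconds until the query prints "true". A minimal stand-alone sketch of that retry pattern (assumptions: `check_running` is a stub standing in for the real kubectl query, and the poll counts are illustrative, not taken from the test):

```shell
#!/bin/sh
# Poll a status command until it reports "true", with a bounded number of attempts.
# check_running stands in for the kubectl go-template query used by the test.
attempt=0
check_running() {
  # Stub: pretend the container becomes ready on the third poll.
  [ "$attempt" -ge 3 ] && echo true
}

while :; do
  attempt=$((attempt + 1))
  if [ "$(check_running)" = "true" ]; then
    echo "pod running after $attempt polls"
    break
  fi
  if [ "$attempt" -ge 10 ]; then
    echo "timed out waiting for pod" >&2
    exit 1
  fi
done
```

The real test sleeps about five seconds between polls and falls back to re-listing the pods each round; the bounded-retry shape is the same.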
•{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":346,"completed":47,"skipped":589,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:12:43.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:12:46.118: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180765, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180765, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180765, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810180765, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:12:49.332: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:12:49.434: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5081-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:12:52.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1665" for this suite.
STEP: Destroying namespace "webhook-1665-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":346,"completed":48,"skipped":618,"failed":0}
------------------------------
[sig-network] Networking
  Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:12:53.282: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-4017
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 24 18:12:53.798: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 24 18:12:54.360: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:12:56.465: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:12:58.467: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:00.464: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:02.466: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:04.464: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:06.464: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:08.463: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:10.464: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:12.463: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 24 18:13:14.464: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 24 18:13:14.668: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jan 24 18:13:17.494: INFO: Setting MaxTries for pod polling to 34 for networking test based on endpoint count 2
Jan 24 18:13:17.494: INFO: Going to poll 192.168.237.171 on port 8083 at least 0 times, with a maximum of 34 tries before failing
Jan 24 18:13:17.596: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.237.171:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4017 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:13:17.596: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:13:18.333: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 24 18:13:18.333: INFO: Going to poll 192.168.69.232 on port 8083 at least 0 times, with a maximum of 34 tries before failing
Jan 24 18:13:18.435: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://192.168.69.232:8083/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4017 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:13:18.435: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:13:19.156: INFO: Found all 1 expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:13:19.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4017" for this suite.
•{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":49,"skipped":654,"failed":0}
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:13:19.370: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:90
Jan 24 18:13:19.882: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 24 18:13:20.095: INFO: Waiting for terminating namespaces to be deleted...
Jan 24 18:13:20.196: INFO: Logging pods the apiserver thinks is on node capz-conf-ewh6sx-md-0-tb56s before test
Jan 24 18:13:20.310: INFO: calico-node-wcsbx from kube-system started at 2023-01-24 17:58:41 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.310: INFO: Container calico-node ready: true, restart count 0
Jan 24 18:13:20.310: INFO: kube-proxy-f8pq9 from kube-system started at 2023-01-24 17:58:41 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.310: INFO: Container kube-proxy ready: true, restart count 0
Jan 24 18:13:20.310: INFO: netserver-0 from pod-network-test-4017 started at 2023-01-24 18:12:54 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.310: INFO: Container webserver ready: true, restart count 0
Jan 24 18:13:20.310: INFO: test-container-pod from pod-network-test-4017 started at 2023-01-24 18:13:14 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.310: INFO: Container webserver ready: true, restart count 0
Jan 24 18:13:20.310: INFO: Logging pods the apiserver thinks is on node capz-conf-ewh6sx-md-0-xf5qq before test
Jan 24 18:13:20.418: INFO: calico-node-sdmsx from kube-system started at 2023-01-24 17:58:41 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.418: INFO: Container calico-node ready: true, restart count 0
Jan 24 18:13:20.418: INFO: kube-proxy-rjvxn from kube-system started at 2023-01-24 17:58:41 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.418: INFO: Container kube-proxy ready: true, restart count 0
Jan 24 18:13:20.418: INFO: host-test-container-pod from pod-network-test-4017 started at 2023-01-24 18:13:14 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.418: INFO: Container agnhost-container ready: true, restart count 0
Jan 24 18:13:20.418: INFO: netserver-1 from pod-network-test-4017 started at 2023-01-24 18:12:54 +0000 UTC (1 container statuses recorded)
Jan 24 18:13:20.418: INFO: Container webserver ready: true, restart count 0
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e7873a9e-7c61-4931-b2ce-c26bdc0c7541 95
STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled
STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 10.1.0.5 on the node which pod4 resides and expect not scheduled
STEP: removing the label kubernetes.io/e2e-e7873a9e-7c61-4931-b2ce-c26bdc0c7541 off the node capz-conf-ewh6sx-md-0-xf5qq
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e7873a9e-7c61-4931-b2ce-c26bdc0c7541
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:18:26.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1867" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
• [SLOW TEST:307.140 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":346,"completed":50,"skipped":681,"failed":0}
------------------------------
[sig-node] PodTemplates
  should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:18:26.511: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of pod templates
Jan 24 18:18:27.127: INFO: created test-podtemplate-1
Jan 24 18:18:27.231: INFO: created test-podtemplate-2
Jan 24 18:18:27.334: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 24 18:18:27.436: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 24 18:18:27.546: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:18:27.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-7759" for this suite.
•{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":346,"completed":51,"skipped":728,"failed":0}
------------------------------
[sig-network] Services
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:18:27.861: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-9743
Jan 24 18:18:28.579: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:18:30.683: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jan 24 18:18:30.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 24 18:18:32.513: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 24 18:18:32.513: INFO: stdout: "iptables"
Jan 24 18:18:32.513: INFO: proxyMode: iptables
Jan 24 18:18:32.620: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 24 18:18:32.722: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-9743
STEP: creating replication controller affinity-nodeport-timeout in namespace services-9743
I0124 18:18:32.947577 14 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-9743, replica count: 3
I0124 18:18:36.098993 14 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 18:18:36.411: INFO: Creating new exec pod
Jan 24 18:18:39.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Jan 24 18:18:40.975: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Jan 24 18:18:40.975: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:18:40.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.111.221.254 80'
Jan 24 18:18:42.114: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.111.221.254 80\nConnection to 10.111.221.254 80 port [tcp/http] succeeded!\n"
Jan 24 18:18:42.114: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:18:42.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.1.0.4 31161'
Jan 24 18:18:43.252: INFO: stderr: "+ nc -v -t -w 2 10.1.0.4 31161\n+ echo hostName\nConnection to 10.1.0.4 31161 port [tcp/*] succeeded!\n"
Jan 24 18:18:43.252: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:18:43.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.1.0.5 31161'
Jan 24 18:18:44.381: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.1.0.5 31161\nConnection to 10.1.0.5 31161 port [tcp/*] succeeded!\n"
Jan 24 18:18:44.381: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:18:44.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.1.0.4:31161/ ; done'
Jan 24 18:18:45.597: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n"
Jan 24 18:18:45.597: INFO: stdout: "\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc\naffinity-nodeport-timeout-xbkrc"
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc
Jan 24
18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc Jan 24 18:18:45.597: INFO: Received response from host: affinity-nodeport-timeout-xbkrc Jan 24 18:18:45.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.1.0.4:31161/' Jan 24 18:18:46.715: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n" Jan 24 18:18:46.715: INFO: stdout: "affinity-nodeport-timeout-xbkrc" Jan 24 18:19:06.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.1.0.4:31161/' Jan 24 18:19:07.858: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n" Jan 24 18:19:07.859: INFO: stdout: "affinity-nodeport-timeout-xbkrc" Jan 24 18:19:27.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9743 exec execpod-affinityhlclg -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.1.0.4:31161/' Jan 24 18:19:28.991: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.1.0.4:31161/\n" Jan 24 18:19:28.991: INFO: stdout: "affinity-nodeport-timeout-8466w" Jan 24 18:19:28.991: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-timeout in namespace services-9743, will wait for the garbage collector to delete the pods Jan 24 18:19:29.558: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 103.221562ms Jan 24 18:19:29.659: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.86174ms [AfterEach] [sig-network] Services 
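The assertion the log records above make is simple: every response gathered within the session-affinity window came from one backend pod (`affinity-nodeport-timeout-xbkrc`), and a request made after the configured timeout elapsed was allowed to land on a different one (`affinity-nodeport-timeout-8466w`). A minimal sketch of that verification logic, using the hostnames observed in this run (the helper name is illustrative, not taken from the test source):

```python
def affinity_held(hosts):
    """True when every recorded response came from the same backend pod."""
    return len(set(hosts)) == 1

# 16 responses collected inside the affinity window (all one pod)...
within_window = ["affinity-nodeport-timeout-xbkrc"] * 16
# ...plus one collected after the session-affinity timeout elapsed.
after_timeout = within_window + ["affinity-nodeport-timeout-8466w"]

assert affinity_held(within_window)      # affinity respected inside the window
assert not affinity_held(after_timeout)  # affinity expired, traffic re-balanced
```

The real test drives this with repeated `curl` calls against the NodePort, as seen in the surrounding log lines.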
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:19:31.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9743" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":346,"completed":52,"skipped":761,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:19:32.198: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-d28bace9-f9fe-4b79-aeb9-d68be9dfcdd8
STEP: Creating a pod to test consume configMaps
Jan 24 18:19:32.923: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f" in namespace "projected-6983" to be "Succeeded or Failed"
Jan 24 18:19:33.025: INFO: Pod "pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f": Phase="Pending", Reason="", readiness=false. Elapsed: 101.866403ms
Jan 24 18:19:35.129: INFO: Pod "pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20618561s
STEP: Saw pod success
Jan 24 18:19:35.129: INFO: Pod "pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f" satisfied condition "Succeeded or Failed"
Jan 24 18:19:35.232: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f container agnhost-container: <nil>
STEP: delete the pod
Jan 24 18:19:35.455: INFO: Waiting for pod pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f to disappear
Jan 24 18:19:35.557: INFO: Pod pod-projected-configmaps-87913b8f-0f33-4aba-8b4a-55f08eac143f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:19:35.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6983" for this suite.
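For reference, a pod consuming a ConfigMap through a projected volume, as this test exercises, looks roughly like the following manifest. All names, the image tag, and the key/path are illustrative placeholders, not the generated values from this run:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo      # placeholder, not the generated test name
spec:
  restartPolicy: Never
  containers:
  - name: demo-container
    image: busybox:1.36               # illustrative image, not the test's agnhost
    command: ["cat", "/etc/projected/data"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-configmap        # placeholder ConfigMap name
          items:
          - key: data
            path: data
```

The test passes once the container exits successfully after reading the projected file, which is why the pod above is checked for the "Succeeded or Failed" condition rather than readiness.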
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":53,"skipped":769,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Variable Expansion
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:19:35.767: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 24 18:21:37.406: INFO: Successfully updated pod "var-expansion-1307a5a4-7dfa-431f-ac71-983fb8c610fd"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 24 18:21:39.611: INFO: Deleting pod "var-expansion-1307a5a4-7dfa-431f-ac71-983fb8c610fd" in namespace "var-expansion-4069"
Jan 24 18:21:39.718: INFO: Wait up to 5m0s for pod "var-expansion-1307a5a4-7dfa-431f-ac71-983fb8c610fd" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:22:11.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4069" for this suite.
• [SLOW TEST:156.367 seconds]
[sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":346,"completed":54,"skipped":777,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:22:12.136: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:22:12.648: INFO: Creating deployment "webserver-deployment"
Jan 24 18:22:12.751: INFO: Waiting for observed generation 1
Jan 24 18:22:12.877: INFO: Waiting for all required pods to come up
Jan 24 18:22:12.984: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 24 18:22:17.291: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 24 18:22:17.494: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 24 18:22:17.700: INFO: Updating deployment webserver-deployment
Jan 24 18:22:17.700: INFO: Waiting for observed generation 2
Jan 24 18:22:19.951: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 24 18:22:20.053: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 24 18:22:20.165: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 24 18:22:20.475: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 24 18:22:20.475: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 24 18:22:20.578: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 24 18:22:20.782: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 24 18:22:20.782: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 24 18:22:20.997: INFO: Updating deployment webserver-deployment
Jan 24 18:22:20.998: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 24 18:22:21.299: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 24 18:22:21.411: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 24 18:22:21.616: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment deployment-9085 491afb48-21bc-42ed-858b-7230939bb828 7576 3 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002fc5f48 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-24 18:22:20 +0000 UTC,LastTransitionTime:2023-01-24 18:22:20 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2023-01-24 18:22:21 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 24 18:22:21.719: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-9085 826a2a90-8a4a-4cee-a827-db5c4af11163 7572 3 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 491afb48-21bc-42ed-858b-7230939bb828 
0xc004820557 0xc004820558}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"491afb48-21bc-42ed-858b-7230939bb828\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048205f8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:22:21.719: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 24 18:22:21.719: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb deployment-9085 bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 7557 3 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 491afb48-21bc-42ed-858b-7230939bb828 0xc004820667 0xc004820668}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"491afb48-21bc-42ed-858b-7230939bb828\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0048206f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:22:21.844: INFO: Pod "webserver-deployment-795d758f88-4z4j9" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-4z4j9 webserver-deployment-795d758f88- deployment-9085 08e8300e-fb74-4e58-a7ce-8b4dc24a2d61 7542 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc001e51d10 0xc001e51d11}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9wwls,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9wwls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.844: INFO: Pod "webserver-deployment-795d758f88-8r4zb" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-8r4zb webserver-deployment-795d758f88- deployment-9085 badf3e17-e57a-4f02-8cb7-a33266db989b 7495 0 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:ee91bd52048f1d287515f21debe1199d90798a9f8d84b3fb7fedb55a205de109 cni.projectcalico.org/podIP:192.168.237.183/32 cni.projectcalico.org/podIPs:192.168.237.183/32] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc001e51e70 0xc001e51e71}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:19 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.183\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pjmvz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequiremen
ts{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjmvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Se
tHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.183,StartTime:2023-01-24 18:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ImagePullBackOff,Message:Back-off pulling image "webserver:404",},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.183,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.844: INFO: Pod "webserver-deployment-795d758f88-929w5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-929w5 webserver-deployment-795d758f88- deployment-9085 4aaba708-0732-46ad-bb0b-746ae1972a52 7485 0 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:4569e03939c3b602c7a1c4eb2511334a9740d146a7fe22241de508dd518a53ec cni.projectcalico.org/podIP:192.168.69.242/32 cni.projectcalico.org/podIPs:192.168.69.242/32] [{apps/v1 
ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc000396570 0xc000396571}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.242\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6prpv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6prpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 
18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.242,StartTime:2023-01-24 18:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.242,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.845: INFO: Pod "webserver-deployment-795d758f88-9bmgv" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9bmgv webserver-deployment-795d758f88- deployment-9085 185c4388-13c6-498e-bbbf-1f8a78c7d8c5 7474 0 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:01960d2e3c96df0f912d4c4ab5224aea5ff4889374af6ba0601b7ebeff0da501 cni.projectcalico.org/podIP:192.168.237.182/32 cni.projectcalico.org/podIPs:192.168.237.182/32] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0003971b0 0xc0003971b1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.182\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z29rn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z29rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 
18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.182,StartTime:2023-01-24 18:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.182,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.845: INFO: Pod "webserver-deployment-795d758f88-9sxk5" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9sxk5 webserver-deployment-795d758f88- deployment-9085 e63025de-2c1d-46c4-8853-a510791cd660 7549 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4190 0xc0047b4191}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dlxc8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dlxc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStat
us{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.845: INFO: Pod "webserver-deployment-795d758f88-bsswm" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-bsswm webserver-deployment-795d758f88- deployment-9085 bfb6f791-67b4-4180-856c-1bafa15b5061 7539 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4300 0xc0047b4301}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-br2kx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-br2kx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.845: INFO: Pod "webserver-deployment-795d758f88-jbzxq" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-jbzxq webserver-deployment-795d758f88- deployment-9085 a95baa37-6db0-4df7-9002-f128163929d4 7558 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4460 0xc0047b4461}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-stnhp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIV
olumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stnhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator
:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.845: INFO: Pod "webserver-deployment-795d758f88-jh9ns" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jh9ns webserver-deployment-795d758f88- deployment-9085 add4aa67-1aae-46ea-8700-2efeaf46351d 7546 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b45c0 0xc0047b45c1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6xfgw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6xfgw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.846: INFO: Pod "webserver-deployment-795d758f88-t8gdn" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-t8gdn webserver-deployment-795d758f88- deployment-9085 ee29fc88-f26a-47c0-85a6-a8e3b3402e9b 7489 0 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:3aee5a13aeb0de4d71d152e140d97bb1198e145542a22187480bae7fd2cf2f81 cni.projectcalico.org/podIP:192.168.69.243/32 cni.projectcalico.org/podIPs:192.168.69.243/32] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4720 0xc0047b4721}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.243\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5h24t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirement
s{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5h24t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Set
HostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.243,StartTime:2023-01-24 18:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.243,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.846: INFO: Pod "webserver-deployment-795d758f88-vp59p" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vp59p webserver-deployment-795d758f88- deployment-9085 660658a9-6b43-4deb-b924-67ff41203a7f 7556 0 2023-01-24 18:22:20 +0000 UTC <nil> <nil> 
map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4940 0xc0047b4941}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hsmhv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hsmhv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 
18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.846: INFO: Pod "webserver-deployment-795d758f88-vsqkz" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-vsqkz webserver-deployment-795d758f88- deployment-9085 76ebeb37-4b97-4f26-a850-079038b431ad 7541 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4b20 0xc0047b4b21}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-k6cdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k6cdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.847: INFO: Pod "webserver-deployment-795d758f88-wwqg8" is not available: 
&Pod{ObjectMeta:{webserver-deployment-795d758f88-wwqg8 webserver-deployment-795d758f88- deployment-9085 494d5f28-a58f-4f69-a316-c8ba41b0f803 7550 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4c90 0xc0047b4c91}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h5jn8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIV
olumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h5jn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator
:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.847: INFO: Pod "webserver-deployment-795d758f88-zzqw8" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-zzqw8 webserver-deployment-795d758f88- deployment-9085 cd4f2507-cc03-4779-9ae4-23d7d49cc9c4 7499 0 2023-01-24 18:22:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[cni.projectcalico.org/containerID:2173141b28c7a9fe4f53b406ad9bd37813ebfc55f828f22728d70b2662d6143e cni.projectcalico.org/podIP:192.168.237.184/32 cni.projectcalico.org/podIPs:192.168.237.184/32] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 826a2a90-8a4a-4cee-a827-db5c4af11163 0xc0047b4e00 0xc0047b4e01}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"826a2a90-8a4a-4cee-a827-db5c4af11163\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.184\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6725k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6725k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 
18:22:17 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.184,StartTime:2023-01-24 18:22:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.847: INFO: Pod "webserver-deployment-847dcfb7fb-2pct9" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2pct9 webserver-deployment-847dcfb7fb- deployment-9085 6275348f-2573-45fc-acf7-5211462875ae 7323 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:404795766dee8da809b21244782e208cfed7667999c5cf18c363accb01d52674 cni.projectcalico.org/podIP:192.168.237.177/32 cni.projectcalico.org/podIPs:192.168.237.177/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b5050 0xc0047b5051}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.177\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-m24vt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m24vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:14 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.177,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://77778257e180f65a7a520c1cf82739df652c6b0dde327b49b29aa8c23a29b4ad,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.177,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.848: INFO: Pod "webserver-deployment-847dcfb7fb-2qqhp" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-2qqhp webserver-deployment-847dcfb7fb- deployment-9085 a6b2d5e9-3d57-42e6-9c77-1415ac92ef43 7538 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b5250 0xc0047b5251}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5b47r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5b47r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnam
eAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.848: INFO: Pod "webserver-deployment-847dcfb7fb-4whlv" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-4whlv webserver-deployment-847dcfb7fb- deployment-9085 058d5031-8a03-44c0-890c-4a71f971caec 7543 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b54a0 0xc0047b54a1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hmsv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hmsv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.849: INFO: Pod "webserver-deployment-847dcfb7fb-75np7" is 
not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-75np7 webserver-deployment-847dcfb7fb- deployment-9085 7b4dfb22-5812-467f-8198-3b1e2aebd95f 7537 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b5be0 0xc0047b5be1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zcsbq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFi
le{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zcsbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{
Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.849: INFO: Pod "webserver-deployment-847dcfb7fb-7f22p" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-7f22p webserver-deployment-847dcfb7fb- deployment-9085 d855eaf4-1262-4dc1-a241-53dd9228d7c3 7352 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:072be642622666ea3576b4fafcf46f32d1ff87d4fd9b41b4e46d7d6da896b325 cni.projectcalico.org/podIP:192.168.237.179/32 cni.projectcalico.org/podIPs:192.168.237.179/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b5d30 0xc0047b5d31}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.179\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-555lm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-555lm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.179,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://38608d2af169439c7f2bb4ac9f830afaa7ccd21364a1a871676ba86d4044e3b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.179,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.850: INFO: Pod "webserver-deployment-847dcfb7fb-8hg4q" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-8hg4q webserver-deployment-847dcfb7fb- deployment-9085 097c714d-8ff3-4ba4-8b30-1b4d6828c736 7568 0 2023-01-24 18:22:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc0047b5f20 0xc0047b5f21}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bhhgr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bhhgr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.850: INFO: Pod "webserver-deployment-847dcfb7fb-c7nv6" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-c7nv6 webserver-deployment-847dcfb7fb- deployment-9085 2eeb159a-4d55-4213-87f6-07525dc5788c 7374 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:868027520f180987f8bbdc7a7b969ee8971d7d8a26f52b5ad95fc8c56aa8f03c cni.projectcalico.org/podIP:192.168.237.181/32 cni.projectcalico.org/podIPs:192.168.237.181/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a0d0 0xc00459a0d1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.181\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jvpz4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvpz4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.181,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6084623444481392fd42304294d39039e0e02d1cd9755abdc59f7901878e7a2f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.181,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.850: INFO: Pod "webserver-deployment-847dcfb7fb-gq289" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gq289 webserver-deployment-847dcfb7fb- deployment-9085 d4f36241-ce16-4ae6-a1be-829f1bfc4f33 7359 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:e58060b1c77e94bca517e5e9b04eac8bd11ef4b98a3a00593d3d05c9501e710f cni.projectcalico.org/podIP:192.168.69.238/32 cni.projectcalico.org/podIPs:192.168.69.238/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a2c0 0xc00459a2c1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.238\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9qn9l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qn9l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.238,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4511fe89a0d44bddea38b9ff24f8bb2d9a1b16fcd11aacb92903cd898c86c2da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.238,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.851: INFO: Pod "webserver-deployment-847dcfb7fb-gvk49" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-gvk49 webserver-deployment-847dcfb7fb- deployment-9085 29e29fe1-c0ec-42f8-9e5c-ad44e818c078 7545 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a4b0 0xc00459a4b1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wpg9n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpg9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnam
eAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.851: INFO: Pod "webserver-deployment-847dcfb7fb-k6mmx" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-k6mmx webserver-deployment-847dcfb7fb- deployment-9085 81eab735-05e1-41ad-9427-1ced2351b981 7553 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a600 0xc00459a601}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bc9dr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bc9dr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.851: INFO: Pod "webserver-deployment-847dcfb7fb-mhwp8" is 
not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-mhwp8 webserver-deployment-847dcfb7fb- deployment-9085 e08bd726-e61f-4490-a056-0e1e2e884f4e 7548 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a750 0xc00459a751}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sd6gm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFi
le{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sd6gm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{
Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.852: INFO: Pod "webserver-deployment-847dcfb7fb-nj779" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nj779 webserver-deployment-847dcfb7fb- deployment-9085 477c050c-0ac9-4f6b-9973-38b43c946a5e 7361 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:48b87c0bf96f5aa383530b751d1e0bffd79fd3801ea983f0d1557dca35342854 cni.projectcalico.org/podIP:192.168.69.239/32 cni.projectcalico.org/podIPs:192.168.69.239/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459a8a0 0xc00459a8a1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.239\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-z2828,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2828,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.239,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://dc67d00f52c3bfaf74031fc79e1e0e2fcb14f93c70ca35fcd318efceda21d7e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.239,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 18:22:21.852: INFO: Pod "webserver-deployment-847dcfb7fb-nzdh8" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-nzdh8 webserver-deployment-847dcfb7fb- deployment-9085 869d1ead-394d-4608-a12e-6428350c0818 7577 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459aa90 0xc00459aa91}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hd75t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hd75t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 18:22:21.853: INFO: Pod "webserver-deployment-847dcfb7fb-qgvbj" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qgvbj webserver-deployment-847dcfb7fb- deployment-9085 fa043568-4fae-47d6-9b0e-207a445a14f3 7368 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:6380ac7ba5e19fb615b98be3fc91208c5105f295810347b222045fe6fee6a805 cni.projectcalico.org/podIP:192.168.69.240/32 cni.projectcalico.org/podIPs:192.168.69.240/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459ac40 0xc00459ac41}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.240\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gf9qj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gf9qj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:16 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.240,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://6d78ae009b995071ec35da1d38297856667a7cb00dc5c3df31550ecd0ed5b452,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.240,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 18:22:21.853: INFO: Pod "webserver-deployment-847dcfb7fb-qx6bc" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-qx6bc webserver-deployment-847dcfb7fb- deployment-9085 fff1ca06-b0f9-4310-a61e-7a147b96d9de 7581 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459ae30 0xc00459ae31}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nxg2f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxg2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 24 18:22:21.853: INFO: Pod "webserver-deployment-847dcfb7fb-s6kt8" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-s6kt8 webserver-deployment-847dcfb7fb- deployment-9085 d1fd0845-eb1c-4a5a-af0b-711c89bccd3e 7349 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:ffbf10b5eb000f5530df4f2fc1ed33a5544b3b100e55296fa540dabfdd09bb85 cni.projectcalico.org/podIP:192.168.237.178/32 cni.projectcalico.org/podIPs:192.168.237.178/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459afe0 0xc00459afe1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.237.178\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8qwmc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8qwmc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:192.168.237.178,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://ee58c9101f695360d547bc3f32611a06606599fe2dd276634df3cc43fb43c905,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.237.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.854: INFO: Pod "webserver-deployment-847dcfb7fb-snlhn" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-snlhn webserver-deployment-847dcfb7fb- deployment-9085 858ed3d2-6130-4bc5-9c9e-0a7bd8b51794 7564 0 2023-01-24 18:22:21 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459b1d0 0xc00459b1d1}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mfbwp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mfbwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.854: INFO: Pod "webserver-deployment-847dcfb7fb-vf4nq" is available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vf4nq webserver-deployment-847dcfb7fb- deployment-9085 14d89cf0-5560-4f92-bc9d-e5063b014492 7331 0 2023-01-24 18:22:12 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:82a4d119f8ebfad90bd4406bc4082cb603d3c104ed80ba19fa7beea84c28a3b4 cni.projectcalico.org/podIP:192.168.69.237/32 cni.projectcalico.org/podIPs:192.168.69.237/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459b380 0xc00459b381}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:12 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:15 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.237\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lkqgm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lkqgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:15 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.237,StartTime:2023-01-24 18:22:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:14 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://4c86fa00990c145e69d0a5458289a3e08bd21fce8c3ffc3b50bdcdedcdaf69a6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.855: INFO: Pod "webserver-deployment-847dcfb7fb-vqh74" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-vqh74 webserver-deployment-847dcfb7fb- deployment-9085 b1ce025a-31e1-494a-8dcc-44b3a4f3c4c0 7544 0 2023-01-24 18:22:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459b570 0xc00459b571}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nqzbj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nqzbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.4,PodIP:,StartTime:2023-01-24 18:22:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:21.855: INFO: Pod "webserver-deployment-847dcfb7fb-x4xs9" is not available: &Pod{ObjectMeta:{webserver-deployment-847dcfb7fb-x4xs9 webserver-deployment-847dcfb7fb- deployment-9085 85c531e7-04ac-4e18-87ca-93a512947765 7598 0 2023-01-24 18:22:20 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:f3e65b57146a2f7f546c0eb178fbd19dc9cc6367b30b266005a990e634dcdfeb cni.projectcalico.org/podIP:192.168.69.244/32 cni.projectcalico.org/podIPs:192.168.69.244/32] [{apps/v1 ReplicaSet webserver-deployment-847dcfb7fb bc49a7b8-b1e7-4c25-823e-d00c7d9e980d 0xc00459b720 0xc00459b721}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:20 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc49a7b8-b1e7-4c25-823e-d00c7d9e980d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5pcmr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5pcmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-24 18:22:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:22:21.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9085" for this suite. 
•{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":346,"completed":55,"skipped":790,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:22:22.138: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] Deployment should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jan 24 18:22:22.663: INFO: Creating simple deployment test-new-deployment Jan 24 18:22:23.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, 
loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 18:22:25.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 18:22:27.192: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 18:22:29.178: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181342, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-847dcfb7fb\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the deployment Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 24 18:22:31.895: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment 
deployment-1571 3489beeb-179b-49ec-9e27-dbe75c6ebc4a 7979 3 2023-01-24 18:22:22 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-01-24 18:22:22 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002cca7c8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:4,UpdatedReplicas:4,AvailableReplicas:1,UnavailableReplicas:3,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-847dcfb7fb" has successfully progressed.,LastUpdateTime:2023-01-24 18:22:30 +0000 UTC,LastTransitionTime:2023-01-24 18:22:22 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-24 18:22:31 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 18:22:31.999: INFO: New ReplicaSet "test-new-deployment-847dcfb7fb" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-847dcfb7fb deployment-1571 f6200730-9340-4587-8845-4d20e07dddba 7978 3 2023-01-24 18:22:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:4 deployment.kubernetes.io/max-replicas:5 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 
3489beeb-179b-49ec-9e27-dbe75c6ebc4a 0xc002ccae50 0xc002ccae51}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:22:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"3489beeb-179b-49ec-9e27-dbe75c6ebc4a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:22:30 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ccaed8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:4,FullyLabeledReplicas:4,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:22:32.103: INFO: Pod "test-new-deployment-847dcfb7fb-bwn6d" is available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-bwn6d test-new-deployment-847dcfb7fb- deployment-1571 c375ad55-6b44-4132-aef5-d5db3250649e 7933 0 2023-01-24 18:22:22 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[cni.projectcalico.org/containerID:0355fa415de8e8009d7d8511905211f36448959ebb212470be26bdea2a05af1a cni.projectcalico.org/podIP:192.168.69.248/32 cni.projectcalico.org/podIPs:192.168.69.248/32] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f6200730-9340-4587-8845-4d20e07dddba 0xc002f2a690 0xc002f2a691}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:22 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6200730-9340-4587-8845-4d20e07dddba\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:22:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:22:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.248\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5npks,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resource
List{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5npks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:ni
l,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:22 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.248,StartTime:2023-01-24 18:22:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:22:25 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50,ContainerID:containerd://54704fc006ec8998a7a92f1d662ffafa78b69bcf8abebef85e943e16e62dd4f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.248,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:32.103: INFO: Pod "test-new-deployment-847dcfb7fb-g2r27" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-g2r27 test-new-deployment-847dcfb7fb- deployment-1571 51c268b0-a539-4467-9811-32c190fcfb44 7974 0 2023-01-24 18:22:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f6200730-9340-4587-8845-4d20e07dddba 0xc002f2aa50 0xc002f2aa51}] [] [{kube-controller-manager Update v1 2023-01-24 
18:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6200730-9340-4587-8845-4d20e07dddba\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d28fz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resource
s:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d28fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]Eph
emeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:32.104: INFO: Pod "test-new-deployment-847dcfb7fb-mb5nb" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-mb5nb test-new-deployment-847dcfb7fb- deployment-1571 098081d4-293a-4f93-ad0e-92e12ce66bff 7983 0 2023-01-24 18:22:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f6200730-9340-4587-8845-4d20e07dddba 0xc002f2ac50 0xc002f2ac51}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6200730-9340-4587-8845-4d20e07dddba\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-24 18:22:31 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lw8kv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},Volume
Mounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lw8kv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Cond
itions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:,StartTime:2023-01-24 18:22:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:22:32.106: INFO: Pod "test-new-deployment-847dcfb7fb-q8zwf" is not available: &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-q8zwf test-new-deployment-847dcfb7fb- deployment-1571 c4302246-86fe-4bec-a49b-4ba1556b7549 7957 0 2023-01-24 18:22:31 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb f6200730-9340-4587-8845-4d20e07dddba 0xc002f2af30 0xc002f2af31}] [] [{kube-controller-manager Update v1 2023-01-24 18:22:31 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f6200730-9340-4587-8845-4d20e07dddba\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-x67wc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limit
s:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x67wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-tb56s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnam
eAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:22:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:22:32.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1571" for this suite. •{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":346,"completed":56,"skipped":830,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:22:32.316: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Jan 24 18:22:32.825: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation 
(kubectl create and apply) allows request with known and required properties Jan 24 18:22:36.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 create -f -' Jan 24 18:22:38.564: INFO: stderr: "" Jan 24 18:22:38.564: INFO: stdout: "e2e-test-crd-publish-openapi-400-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 24 18:22:38.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 delete e2e-test-crd-publish-openapi-400-crds test-foo' Jan 24 18:22:39.087: INFO: stderr: "" Jan 24 18:22:39.087: INFO: stdout: "e2e-test-crd-publish-openapi-400-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 24 18:22:39.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 apply -f -' Jan 24 18:22:39.942: INFO: stderr: "" Jan 24 18:22:39.942: INFO: stdout: "e2e-test-crd-publish-openapi-400-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 24 18:22:39.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 delete e2e-test-crd-publish-openapi-400-crds test-foo' Jan 24 18:22:40.450: INFO: stderr: "" Jan 24 18:22:40.450: INFO: stdout: "e2e-test-crd-publish-openapi-400-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 24 18:22:40.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 create -f -' Jan 24 18:22:40.971: INFO: rc: 1 Jan 24 18:22:40.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 apply -f -' Jan 24 18:22:41.483: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Jan 24 18:22:41.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 create -f -' Jan 24 18:22:41.992: INFO: rc: 1 Jan 24 18:22:41.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 --namespace=crd-publish-openapi-199 apply -f -' Jan 24 18:22:42.494: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Jan 24 18:22:42.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 explain e2e-test-crd-publish-openapi-400-crds' Jan 24 18:22:43.011: INFO: stderr: "" Jan 24 18:22:43.011: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-400-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Jan 24 18:22:43.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 explain e2e-test-crd-publish-openapi-400-crds.metadata' Jan 24 18:22:43.540: INFO: stderr: "" Jan 24 18:22:43.540: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-400-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. 
There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 24 18:22:43.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 explain e2e-test-crd-publish-openapi-400-crds.spec' Jan 24 18:22:44.083: INFO: stderr: "" Jan 24 18:22:44.083: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-400-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 24 18:22:44.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 explain e2e-test-crd-publish-openapi-400-crds.spec.bars' Jan 24 18:22:44.609: INFO: stderr: "" Jan 24 18:22:44.609: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-400-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 24 18:22:44.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-199 explain e2e-test-crd-publish-openapi-400-crds.spec.bars2' Jan 24 18:22:45.108: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:22:49.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-199" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":346,"completed":57,"skipped":834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:22:49.450: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace [It] should replace jobs when ReplaceConcurrent [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ReplaceConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring the job is replaced with a new one STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:24:00.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-8768" for this suite. 
•{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":346,"completed":58,"skipped":867,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:24:00.815: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating projection with secret that has name projected-secret-test-map-99d85411-796f-41d6-bdee-1fdc70d61298 STEP: Creating a pod to test consume secrets Jan 24 18:24:01.536: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74" in namespace "projected-7407" to be "Succeeded or Failed" Jan 24 18:24:01.640: INFO: Pod "pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74": 
Phase="Pending", Reason="", readiness=false. Elapsed: 104.463664ms Jan 24 18:24:03.744: INFO: Pod "pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208384994s STEP: Saw pod success Jan 24 18:24:03.744: INFO: Pod "pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74" satisfied condition "Succeeded or Failed" Jan 24 18:24:03.847: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74 container projected-secret-volume-test: <nil> STEP: delete the pod Jan 24 18:24:04.073: INFO: Waiting for pod pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74 to disappear Jan 24 18:24:04.176: INFO: Pod pod-projected-secrets-42e5b6de-b66b-474d-a022-f486ed258f74 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:24:04.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7407" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":59,"skipped":909,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:24:04.390: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jan 24 18:24:04.903: INFO: namespace kubectl-3760 Jan 24 18:24:04.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3760 create -f -' Jan 24 18:24:05.973: INFO: stderr: "" Jan 24 18:24:05.973: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 24 18:24:07.078: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:07.078: INFO: Found 0 / 1 Jan 24 18:24:08.077: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:08.077: INFO: Found 1 / 1 Jan 24 18:24:08.078: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Jan 24 18:24:08.180: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:08.180: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 24 18:24:08.180: INFO: wait on agnhost-primary startup in kubectl-3760 Jan 24 18:24:08.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3760 logs agnhost-primary-sngs5 agnhost-primary' Jan 24 18:24:08.692: INFO: stderr: "" Jan 24 18:24:08.692: INFO: stdout: "Paused\n" STEP: exposing RC Jan 24 18:24:08.692: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3760 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 24 18:24:09.247: INFO: stderr: "" Jan 24 18:24:09.247: INFO: stdout: "service/rm2 exposed\n" Jan 24 18:24:09.349: INFO: Service rm2 in namespace kubectl-3760 found. STEP: exposing service Jan 24 18:24:11.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3760 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 24 18:24:12.069: INFO: stderr: "" Jan 24 18:24:12.069: INFO: stdout: "service/rm3 exposed\n" Jan 24 18:24:12.176: INFO: Service rm3 in namespace kubectl-3760 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:24:14.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3760" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":346,"completed":60,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:24:14.596: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Jan 24 18:24:15.213: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-5179 1d61d25d-aba5-4acf-bf2c-9ff8678d170e 8345 0 2023-01-24 18:24:15 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-24 18:24:15 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-npjjv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost
-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-npjjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadin
essGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 24 18:24:15.316: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 24 18:24:17.419: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... Jan 24 18:24:17.420: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5179 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 24 18:24:17.420: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Verifying customized DNS server is configured on pod... Jan 24 18:24:18.156: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5179 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 24 18:24:18.156: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 24 18:24:18.885: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:24:18.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5179" for this suite. 
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":346,"completed":61,"skipped":935,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 24 18:24:19.212: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Jan 24 18:24:19.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9220 create -f -' Jan 24 18:24:20.357: INFO: stderr: "" Jan 24 18:24:20.357: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 24 18:24:21.461: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:21.461: INFO: Found 0 / 1 Jan 24 18:24:22.461: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:22.461: INFO: Found 1 / 1 Jan 24 18:24:22.461: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods Jan 24 18:24:22.564: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:22.564: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 24 18:24:22.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9220 patch pod agnhost-primary-4sh7f -p {"metadata":{"annotations":{"x":"y"}}}' Jan 24 18:24:23.093: INFO: stderr: "" Jan 24 18:24:23.093: INFO: stdout: "pod/agnhost-primary-4sh7f patched\n" STEP: checking annotations Jan 24 18:24:23.196: INFO: Selector matched 1 pods for map[app:agnhost] Jan 24 18:24:23.196: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:24:23.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9220" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":346,"completed":62,"skipped":952,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace
  should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:24:23.411: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558
[It] should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-1
Jan 24 18:24:23.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9383 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Jan 24 18:24:24.356: INFO: stderr: ""
Jan 24 18:24:24.357: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 24 18:24:29.507:
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9383 get pod e2e-test-httpd-pod -o json' Jan 24 18:24:29.918: INFO: stderr: "" Jan 24 18:24:29.918: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"annotations\": {\n \"cni.projectcalico.org/containerID\": \"5f70e929cfcfecc0f990debaafe83425dc12da94a0319b983541f935adfbbc2d\",\n \"cni.projectcalico.org/podIP\": \"192.168.237.131/32\",\n \"cni.projectcalico.org/podIPs\": \"192.168.237.131/32\"\n },\n \"creationTimestamp\": \"2023-01-24T18:24:24Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-9383\",\n \"resourceVersion\": \"8459\",\n \"uid\": \"96c270c7-d71b-4544-8bd8-46b9147be6bc\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-bdx9v\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"capz-conf-ewh6sx-md-0-tb56s\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": 
\"kube-api-access-bdx9v\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-24T18:24:24Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-24T18:24:26Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-24T18:24:26Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-24T18:24:24Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://99ddd5b1821a3baa86aba9b8d3b37d25fb3365decf129f1400192c0e1ea17868\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-1\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:b913fa234cc3473cfe16e937d106b455a7609f927f59031c81aca791e2689b50\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-24T18:24:25Z\"\n }\n }\n }\n ],\n \"hostIP\": \"10.1.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.237.131\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.237.131\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-24T18:24:24Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 24 18:24:29.918: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-9383 replace -f -'
Jan 24 18:24:30.559: INFO: stderr: ""
Jan 24 18:24:30.559: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-1
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan 24 18:24:30.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9383 delete pods e2e-test-httpd-pod'
Jan 24 18:24:32.339: INFO: stderr: ""
Jan 24 18:24:32.339: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:24:32.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9383" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":346,"completed":63,"skipped":962,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:24:32.549: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 24 18:24:33.165: INFO: Waiting up to 5m0s for pod "pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81" in namespace "emptydir-3848" to be "Succeeded or Failed"
Jan 24 18:24:33.268: INFO: Pod "pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81": Phase="Pending", Reason="", readiness=false. Elapsed: 103.156906ms
Jan 24 18:24:35.371: INFO: Pod "pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206024686s
STEP: Saw pod success
Jan 24 18:24:35.371: INFO: Pod "pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81" satisfied condition "Succeeded or Failed"
Jan 24 18:24:35.473: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81 container test-container: <nil>
STEP: delete the pod
Jan 24 18:24:35.696: INFO: Waiting for pod pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81 to disappear
Jan 24 18:24:35.797: INFO: Pod pod-b2a2d2a6-2f50-4449-92bb-c52087d38d81 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:24:35.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3848" for this suite.
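The emptydir test above runs a short-lived pod against a tmpfs-backed emptyDir volume and checks file ownership/mode semantics. A minimal sketch of such a pod (the name, busybox image, and shell command are illustrative stand-ins, not the exact spec the suite generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo        # hypothetical name
spec:
  restartPolicy: Never             # run-to-completion, like the "Succeeded or Failed" wait above
  containers:
  - name: test-container
    image: busybox                 # stand-in; the suite uses its own e2e test image
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # Memory medium = tmpfs-backed emptyDir, hence [LinuxOnly]
```

The framework then reads the container logs (the `Trying to get logs ...` line above) and asserts the reported mode and owner match the expected `0666` / root values.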
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":64,"skipped":963,"failed":0}
SSSSS
------------------------------
[sig-node] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:24:36.009: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-2b215739-1140-467c-aaf4-4ee82fe1d4c7 in namespace container-probe-9114
Jan 24 18:24:38.829: INFO: Started pod liveness-2b215739-1140-467c-aaf4-4ee82fe1d4c7 in namespace container-probe-9114
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 18:24:38.932: INFO: Initial restart count of pod liveness-2b215739-1140-467c-aaf4-4ee82fe1d4c7 is 0
Jan 24 18:24:57.975: INFO: Restart count of pod container-probe-9114/liveness-2b215739-1140-467c-aaf4-4ee82fe1d4c7 is now 1 (19.043278382s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:24:58.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9114" for this suite.
•{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":65,"skipped":968,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:24:58.301: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:24:58.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8986" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":346,"completed":66,"skipped":1018,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPreemption [Serial]
  validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:24:59.125: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sched-preemption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:90
Jan 24 18:24:59.948: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 18:26:00.828: INFO: Waiting for terminating namespaces to be deleted...
[It] validates basic preemption works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create pods that use 4/5 of node resources.
Jan 24 18:26:01.199: INFO: Created pod: pod0-0-sched-preemption-low-priority
Jan 24 18:26:01.302: INFO: Created pod: pod0-1-sched-preemption-medium-priority
Jan 24 18:26:01.515: INFO: Created pod: pod1-0-sched-preemption-medium-priority
Jan 24 18:26:01.618: INFO: Created pod: pod1-1-sched-preemption-medium-priority
STEP: Wait for pods to be scheduled.
STEP: Run a high priority pod that has same requirements as that of lower priority pod
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:26:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-6198" for this suite.
[AfterEach] [sig-scheduling] SchedulerPreemption [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":346,"completed":67,"skipped":1019,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:26:19.816: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:26:20.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc" in namespace "downward-api-9191" to be "Succeeded or Failed"
Jan 24 18:26:20.536: INFO: Pod "downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc": Phase="Pending", Reason="", readiness=false. Elapsed: 102.420338ms
Jan 24 18:26:22.640: INFO: Pod "downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc": Phase="Running", Reason="", readiness=true. Elapsed: 2.206080294s
Jan 24 18:26:24.746: INFO: Pod "downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.311539835s
STEP: Saw pod success
Jan 24 18:26:24.746: INFO: Pod "downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc" satisfied condition "Succeeded or Failed"
Jan 24 18:26:24.851: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc container client-container: <nil>
STEP: delete the pod
Jan 24 18:26:25.080: INFO: Waiting for pod downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc to disappear
Jan 24 18:26:25.186: INFO: Pod downwardapi-volume-42fef0a3-247e-4150-a1bd-5db774fd03cc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:26:25.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9191" for this suite.
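The downward API volume test above mounts a file whose content is the container's own memory limit, then has the container print it. A minimal sketch of the mechanism (pod, volume, and file names are illustrative; the suite's generated spec differs in detail):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                 # stand-in image
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: "64Mi"             # the value projected into the mounted file (in bytes)
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "memory_limit"
        resourceFieldRef:          # exposes a resource field rather than a metadata field
          containerName: client-container
          resource: limits.memory
```

The framework reads the container log (the `Trying to get logs ... container client-container` line) and asserts it matches the configured limit.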
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":346,"completed":68,"skipped":1029,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Discovery
  should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:26:25.405: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:26:26.558: INFO: Checking APIGroup: apiregistration.k8s.io
Jan 24 18:26:26.658: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Jan 24 18:26:26.658: INFO: Versions found [{apiregistration.k8s.io/v1 v1}]
Jan 24 18:26:26.658: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Jan 24 18:26:26.658: INFO: Checking APIGroup: apps
Jan 24 18:26:26.759: INFO: PreferredVersion.GroupVersion: apps/v1
Jan 24 18:26:26.759: INFO: Versions found [{apps/v1 v1}]
Jan 24 18:26:26.759: INFO: apps/v1 matches apps/v1
Jan 24 18:26:26.759: INFO: Checking APIGroup: events.k8s.io
Jan 24 18:26:26.859: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Jan 24 18:26:26.859: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Jan 24 18:26:26.859: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Jan 24 18:26:26.859: INFO: Checking APIGroup: authentication.k8s.io
Jan 24 18:26:26.960: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Jan 24 18:26:26.960: INFO: Versions found [{authentication.k8s.io/v1 v1}]
Jan 24 18:26:26.960: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Jan 24 18:26:26.960: INFO: Checking APIGroup: authorization.k8s.io
Jan 24 18:26:27.060: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Jan 24 18:26:27.060: INFO: Versions found [{authorization.k8s.io/v1 v1}]
Jan 24 18:26:27.060: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Jan 24 18:26:27.060: INFO: Checking APIGroup: autoscaling
Jan 24 18:26:27.161: INFO: PreferredVersion.GroupVersion: autoscaling/v1
Jan 24 18:26:27.161: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Jan 24 18:26:27.161: INFO: autoscaling/v1 matches autoscaling/v1
Jan 24 18:26:27.161: INFO: Checking APIGroup: batch
Jan 24 18:26:27.262: INFO: PreferredVersion.GroupVersion: batch/v1
Jan 24 18:26:27.262: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Jan 24 18:26:27.262: INFO: batch/v1 matches batch/v1
Jan 24 18:26:27.262: INFO: Checking APIGroup: certificates.k8s.io
Jan 24 18:26:27.362: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Jan 24 18:26:27.362: INFO: Versions found [{certificates.k8s.io/v1 v1}]
Jan 24 18:26:27.362: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Jan 24 18:26:27.362: INFO: Checking APIGroup: networking.k8s.io
Jan 24 18:26:27.463: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Jan 24 18:26:27.463: INFO: Versions found [{networking.k8s.io/v1 v1}]
Jan 24 18:26:27.463: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Jan 24 18:26:27.463: INFO: Checking APIGroup: policy
Jan 24 18:26:27.564: INFO: PreferredVersion.GroupVersion: policy/v1
Jan 24 18:26:27.564: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Jan 24 18:26:27.564: INFO: policy/v1 matches policy/v1
Jan 24 18:26:27.564: INFO: Checking APIGroup: rbac.authorization.k8s.io
Jan 24 18:26:27.665: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Jan 24 18:26:27.665: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}]
Jan 24 18:26:27.665: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Jan 24 18:26:27.665: INFO: Checking APIGroup: storage.k8s.io
Jan 24 18:26:27.766: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Jan 24 18:26:27.766: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Jan 24 18:26:27.766: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Jan 24 18:26:27.766: INFO: Checking APIGroup: admissionregistration.k8s.io
Jan 24 18:26:27.867: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Jan 24 18:26:27.867: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}]
Jan 24 18:26:27.867: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Jan 24 18:26:27.867: INFO: Checking APIGroup: apiextensions.k8s.io
Jan 24 18:26:27.968: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Jan 24 18:26:27.968: INFO: Versions found [{apiextensions.k8s.io/v1 v1}]
Jan 24 18:26:27.968: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Jan 24 18:26:27.968: INFO: Checking APIGroup: scheduling.k8s.io
Jan 24 18:26:28.069: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Jan 24 18:26:28.069: INFO: Versions found [{scheduling.k8s.io/v1 v1}]
Jan 24 18:26:28.069: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Jan 24 18:26:28.069: INFO: Checking APIGroup: coordination.k8s.io
Jan 24 18:26:28.169: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Jan 24 18:26:28.170: INFO: Versions found [{coordination.k8s.io/v1 v1}]
Jan 24 18:26:28.170: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Jan 24 18:26:28.170: INFO: Checking APIGroup: node.k8s.io
Jan 24 18:26:28.270: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Jan 24 18:26:28.270: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Jan 24 18:26:28.270: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Jan 24 18:26:28.270: INFO: Checking APIGroup: discovery.k8s.io
Jan 24 18:26:28.371: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Jan 24 18:26:28.371: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Jan 24 18:26:28.371: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Jan 24 18:26:28.371: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Jan 24 18:26:28.472: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1
Jan 24 18:26:28.472: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Jan 24 18:26:28.472: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1
Jan 24 18:26:28.472: INFO: Checking APIGroup: crd.projectcalico.org
Jan 24 18:26:28.572: INFO: PreferredVersion.GroupVersion: crd.projectcalico.org/v1
Jan 24 18:26:28.573: INFO: Versions found [{crd.projectcalico.org/v1 v1}]
Jan 24 18:26:28.573: INFO: crd.projectcalico.org/v1 matches crd.projectcalico.org/v1
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:26:28.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-1904" for this suite.
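Each check above fetches the API group discovery document and verifies that the group's `preferredVersion` is one of its served `versions`. For reference, entries in the `GET /apis` response have this shape (values shown for the `batch` group exactly as reported in the log, where v1 is preferred over v1beta1):

```yaml
# One APIGroup entry from the APIGroupList returned by GET /apis
name: batch
versions:
- groupVersion: batch/v1
  version: v1
- groupVersion: batch/v1beta1
  version: v1beta1
preferredVersion:
  groupVersion: batch/v1
  version: v1
```

The `matches` log lines correspond to the assertion that `preferredVersion.groupVersion` appears in the `versions` list.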
•{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":346,"completed":69,"skipped":1049,"failed":0}
SSSS
------------------------------
[sig-node] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:26:28.784: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 24 18:26:29.501: INFO: The status of Pod pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:26:31.603: INFO: The status of Pod pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 24 18:26:32.519: INFO: Successfully updated pod "pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7"
Jan 24 18:26:32.519: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7" in namespace "pods-8223" to be "terminated due to deadline exceeded"
Jan 24 18:26:32.621: INFO: Pod "pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7": Phase="Running", Reason="", readiness=true. Elapsed: 101.975777ms
Jan 24 18:26:34.724: INFO: Pod "pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 2.204867721s
Jan 24 18:26:34.724: INFO: Pod "pod-update-activedeadlineseconds-7e836a8d-cc50-4186-9385-469d956964f7" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:26:34.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8223" for this suite.
•{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":346,"completed":70,"skipped":1053,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:26:34.938: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 24 18:26:38.471: INFO: Successfully updated pod "adopt-release--1-mx6mr"
STEP: Checking that the Job readopts the Pod
Jan 24 18:26:38.471: INFO: Waiting up to 15m0s for pod "adopt-release--1-mx6mr" in namespace "job-936" to be "adopted"
Jan 24 18:26:38.573: INFO: Pod "adopt-release--1-mx6mr": Phase="Running", Reason="", readiness=true. Elapsed: 102.067816ms
Jan 24 18:26:38.573: INFO: Pod "adopt-release--1-mx6mr" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 24 18:26:39.282: INFO: Successfully updated pod "adopt-release--1-mx6mr"
STEP: Checking that the Job releases the Pod
Jan 24 18:26:39.282: INFO: Waiting up to 15m0s for pod "adopt-release--1-mx6mr" in namespace "job-936" to be "released"
Jan 24 18:26:39.384: INFO: Pod "adopt-release--1-mx6mr": Phase="Running", Reason="", readiness=true. Elapsed: 102.103042ms
Jan 24 18:26:39.384: INFO: Pod "adopt-release--1-mx6mr" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:26:39.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-936" for this suite.
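The adopt/release steps above exercise controller ownership: a Job controller adopts orphaned pods whose labels match its selector (restoring the ownerReference) and releases pods whose matching labels are removed. A minimal Job sketch, assuming an illustrative image and command (the Job name matches the log; the suite's actual template differs):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: adopt-release            # name taken from the log's pod prefix
spec:
  parallelism: 2                 # the test first waits for active pods == parallelism
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox           # stand-in image
        command: ["sleep", "3600"]
# Stripping the controller-managed labels from one of this Job's pods causes the
# controller to release it (the ownerReference is dropped); restoring them lets
# the Job re-adopt the orphan, which is what the "adopted"/"released" waits verify.
```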
•{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":346,"completed":71,"skipped":1122,"failed":0}
SSSSS
------------------------------
[sig-node] Probing container
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:26:39.598: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-a822dcff-3f20-4ac9-b34a-0d92cc3333b5 in namespace container-probe-1637
Jan 24 18:26:42.420: INFO: Started pod busybox-a822dcff-3f20-4ac9-b34a-0d92cc3333b5 in namespace container-probe-1637
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 18:26:42.522: INFO: Initial restart count of pod busybox-a822dcff-3f20-4ac9-b34a-0d92cc3333b5 is 0
Jan 24 18:27:33.128: INFO: Restart count of pod container-probe-1637/busybox-a822dcff-3f20-4ac9-b34a-0d92cc3333b5 is now 1 (50.605646136s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:27:33.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1637" for this suite.
•{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":346,"completed":72,"skipped":1127,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Probing container
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:27:33.542: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:27:34.266: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:27:36.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:38.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:40.371: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:42.371: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:44.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:46.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:48.369: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:50.369: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:52.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = false)
Jan 24 18:27:54.370: INFO: The status of Pod test-webserver-209a4a3b-7185-413f-9683-9dd7b55bd513 is Running (Ready = true)
Jan 24 18:27:54.473: INFO: Container started at 2023-01-24 18:27:35 +0000 UTC, pod became ready at 2023-01-24 18:27:54 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:27:54.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-717" for this suite.
•{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":346,"completed":73,"skipped":1135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:27:54.688: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jan 24 18:27:55.307: INFO: Waiting up to 5m0s for pod "downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8" in namespace "downward-api-5667" to be "Succeeded or Failed"
Jan 24 18:27:55.414: INFO: Pod "downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8": Phase="Pending", Reason="", readiness=false. Elapsed: 107.377395ms
Jan 24 18:27:57.518: INFO: Pod "downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8": Phase="Running", Reason="", readiness=true. Elapsed: 2.211095178s
Jan 24 18:27:59.622: INFO: Pod "downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.314815894s
STEP: Saw pod success
Jan 24 18:27:59.622: INFO: Pod "downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8" satisfied condition "Succeeded or Failed"
Jan 24 18:27:59.725: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8 container dapi-container: <nil>
STEP: delete the pod
Jan 24 18:27:59.955: INFO: Waiting for pod downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8 to disappear
Jan 24 18:28:00.058: INFO: Pod downward-api-ee8c9b6c-2190-4454-b69e-1042eb562cc8 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:00.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5667" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":346,"completed":74,"skipped":1189,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:00.270: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 24 18:28:00.781: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:28:05.437: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:22.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4033" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":346,"completed":75,"skipped":1189,"failed":0}
S
------------------------------
[sig-storage] Downward API volume
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:23.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:28:23.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6" in namespace "downward-api-7223" to be "Succeeded or Failed"
Jan 24 18:28:23.806: INFO: Pod "downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6": Phase="Pending", Reason="", readiness=false. Elapsed: 104.817306ms
Jan 24 18:28:25.909: INFO: Pod "downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.208058902s
STEP: Saw pod success
Jan 24 18:28:25.909: INFO: Pod "downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6" satisfied condition "Succeeded or Failed"
Jan 24 18:28:26.013: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6 container client-container: <nil>
STEP: delete the pod
Jan 24 18:28:26.241: INFO: Waiting for pod downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6 to disappear
Jan 24 18:28:26.343: INFO: Pod downwardapi-volume-392dffd0-4be7-4602-96e5-3914fe5125c6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:26.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7223" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":76,"skipped":1190,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:26.556: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jan 24 18:28:34.132: INFO: The status of Pod kube-controller-manager-capz-conf-ewh6sx-control-plane-pt2q9 is Running (Ready = true)
Jan 24 18:28:35.179: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:35.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5217" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":346,"completed":77,"skipped":1201,"failed":0}
SSSSSS
------------------------------
[sig-node] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:35.392: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Jan 24 18:28:36.014: INFO: Waiting up to 5m0s for pod "var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9" in namespace "var-expansion-8599" to be "Succeeded or Failed"
Jan 24 18:28:36.117: INFO: Pod "var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9": Phase="Pending", Reason="", readiness=false. Elapsed: 102.921387ms
Jan 24 18:28:38.221: INFO: Pod "var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206996067s
STEP: Saw pod success
Jan 24 18:28:38.221: INFO: Pod "var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9" satisfied condition "Succeeded or Failed"
Jan 24 18:28:38.325: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9 container dapi-container: <nil>
STEP: delete the pod
Jan 24 18:28:38.546: INFO: Waiting for pod var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9 to disappear
Jan 24 18:28:38.648: INFO: Pod var-expansion-187eb4c7-a7d0-47b7-95b3-59c5f0b30ea9 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:38.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8599" for this suite.
•{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":346,"completed":78,"skipped":1207,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Security Context
  When creating a pod with privileged
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:38.863: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:28:39.486: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-40efe41f-6917-4f88-b2b4-48ca578cd524" in namespace "security-context-test-7318" to be "Succeeded or Failed"
Jan 24 18:28:39.589: INFO: Pod "busybox-privileged-false-40efe41f-6917-4f88-b2b4-48ca578cd524": Phase="Pending", Reason="", readiness=false. Elapsed: 102.665112ms
Jan 24 18:28:41.693: INFO: Pod "busybox-privileged-false-40efe41f-6917-4f88-b2b4-48ca578cd524": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206713462s
Jan 24 18:28:41.693: INFO: Pod "busybox-privileged-false-40efe41f-6917-4f88-b2b4-48ca578cd524" satisfied condition "Succeeded or Failed"
Jan 24 18:28:41.800: INFO: Got logs for pod "busybox-privileged-false-40efe41f-6917-4f88-b2b4-48ca578cd524": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:28:41.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7318" for this suite.
•{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":79,"skipped":1218,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:28:42.014: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-c98c9e95-1480-4ac0-acd9-3f15d890eb26
STEP: Creating secret with name s-test-opt-upd-bc85316b-ac61-4266-9d8a-22b0342bcff0
STEP: Creating the pod
Jan 24 18:28:43.090: INFO: The status of Pod pod-secrets-03fc03d7-8234-4320-af1c-68990dba1b61 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:28:45.194: INFO: The status of Pod pod-secrets-03fc03d7-8234-4320-af1c-68990dba1b61 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:28:47.194: INFO: The status of Pod pod-secrets-03fc03d7-8234-4320-af1c-68990dba1b61 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-c98c9e95-1480-4ac0-acd9-3f15d890eb26
STEP: Updating secret s-test-opt-upd-bc85316b-ac61-4266-9d8a-22b0342bcff0
STEP: Creating secret with name s-test-opt-create-e350bd5d-abb9-4bcf-9e57-0796bd1acc7e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:30:14.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5745" for this suite.
•{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":80,"skipped":1270,"failed":0}
S
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:30:14.862: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:30:15.803: INFO: The status of Pod pod-secrets-df4dc4bd-23bd-4893-ad8c-80e5c77588fc is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:30:17.908: INFO: The status of Pod pod-secrets-df4dc4bd-23bd-4893-ad8c-80e5c77588fc is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:30:18.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-162" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":346,"completed":81,"skipped":1271,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
  should list, patch and delete a collection of StatefulSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:30:18.550: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace statefulset-1533
[It] should list, patch and delete a collection of StatefulSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:30:19.378: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 18:30:29.483: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: patching the StatefulSet
Jan 24 18:30:30.004: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 18:30:30.004: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 18:30:40.112: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 18:30:40.112: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Listing all StatefulSets
STEP: Delete all of the StatefulSets
STEP: Verify that StatefulSets have been deleted
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jan 24 18:30:40.631: INFO: Deleting all statefulset in ns statefulset-1533
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:30:40.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1533" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":346,"completed":82,"skipped":1275,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:30:41.165: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:30:42.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181842, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181842, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181842, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181842, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:30:46.128: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:30:46.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:30:49.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6423" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":346,"completed":83,"skipped":1288,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:30:49.996: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:142
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:30:51.092: INFO: Create a RollingUpdate DaemonSet
Jan 24 18:30:51.206: INFO: Check that daemon pods launch on every node of the cluster
Jan 24 18:30:51.320: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:51.429: INFO: Number of nodes with available pods: 0
Jan 24 18:30:51.429: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:30:52.538: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:52.643: INFO: Number of nodes with available pods: 1
Jan 24 18:30:52.643: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:30:53.539: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:53.644: INFO: Number of nodes with available pods: 1
Jan 24 18:30:53.644: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:30:54.540: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:54.644: INFO: Number of nodes with available pods: 1
Jan 24 18:30:54.644: INFO: Node capz-conf-ewh6sx-md-0-tb56s is running more than one daemon pod
Jan 24 18:30:55.541: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:55.646: INFO: Number of nodes with available pods: 2
Jan 24 18:30:55.646: INFO: Number of running nodes: 2, number of available pods: 2
Jan 24 18:30:55.646: INFO: Update the DaemonSet to trigger a rollout
Jan 24 18:30:55.855: INFO: Updating DaemonSet daemon-set
Jan 24 18:30:58.279: INFO: Roll back the DaemonSet before rollout is complete
Jan 24 18:30:58.488: INFO: Updating DaemonSet daemon-set
Jan 24 18:30:58.488: INFO: Make sure DaemonSet rollback is complete
Jan 24 18:30:58.702: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:30:59.916: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
Jan 24 18:31:00.808: INFO: Pod daemon-set-556jp is not available
Jan 24 18:31:00.917: INFO: DaemonSet pods can't tolerate node capz-conf-ewh6sx-control-plane-pt2q9 with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:108
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-226, will wait for the garbage collector to delete the pods
Jan 24 18:31:01.491: INFO: Deleting DaemonSet.extensions daemon-set took: 112.681544ms
Jan 24 18:31:01.591: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.396442ms
Jan 24 18:31:04.596: INFO: Number of nodes with available pods: 0
Jan 24 18:31:04.596: INFO: Number of running nodes: 0, number of available pods: 0
Jan 24 18:31:04.699: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"resourceVersion":"10442"},"items":null}
Jan 24 18:31:04.801: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"10442"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:05.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-226" for this suite.
•{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":346,"completed":84,"skipped":1301,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:05.337: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:31:05.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847" in namespace "downward-api-4986" to be "Succeeded or Failed"
Jan 24 18:31:06.062: INFO: Pod "downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847": Phase="Pending", Reason="", readiness=false. Elapsed: 102.851218ms
Jan 24 18:31:08.168: INFO: Pod "downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20807994s
STEP: Saw pod success
Jan 24 18:31:08.168: INFO: Pod "downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847" satisfied condition "Succeeded or Failed"
Jan 24 18:31:08.271: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847 container client-container: <nil>
STEP: delete the pod
Jan 24 18:31:08.500: INFO: Waiting for pod downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847 to disappear
Jan 24 18:31:08.603: INFO: Pod downwardapi-volume-ab581ef7-0181-403f-b2ea-d0f10c9aa847 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:08.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4986" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":85,"skipped":1305,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:08.819: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:31:10.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181870, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181870, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181870, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181870, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:31:13.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:25.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3133" for this suite.
STEP: Destroying namespace "webhook-3133-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":346,"completed":86,"skipped":1323,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:26.351: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-435e796e-d494-48e8-bb0c-f6968ee60ade
STEP: Creating a pod to test consume configMaps
Jan 24 18:31:27.079: INFO: Waiting up to 5m0s for pod "pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb" in namespace "configmap-3640" to be "Succeeded or Failed"
Jan 24 18:31:27.181: INFO: Pod "pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb": Phase="Pending", Reason="", readiness=false. Elapsed: 102.380735ms
Jan 24 18:31:29.285: INFO: Pod "pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206751277s
STEP: Saw pod success
Jan 24 18:31:29.285: INFO: Pod "pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb" satisfied condition "Succeeded or Failed"
Jan 24 18:31:29.388: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb container agnhost-container: <nil>
STEP: delete the pod
Jan 24 18:31:29.610: INFO: Waiting for pod pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb to disappear
Jan 24 18:31:29.712: INFO: Pod pod-configmaps-8ba8fab8-2d84-4cf5-9800-87a1faef57eb no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:29.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3640" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":346,"completed":87,"skipped":1392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:29.926: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:31.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5813" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":346,"completed":88,"skipped":1467,"failed":0}
S
------------------------------
[sig-node] Pods
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:31.623: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:31:32.143: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 24 18:31:32.367: INFO: The status of Pod pod-exec-websocket-b9214ebd-59bc-484d-bdcc-76278e3f0b0f is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:31:34.471: INFO: The status of Pod pod-exec-websocket-b9214ebd-59bc-484d-bdcc-76278e3f0b0f is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:34.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-268" for this suite.
•{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":346,"completed":89,"skipped":1468,"failed":0}
S
------------------------------
[sig-network] DNS
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:35.204: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2644.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2644.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2644.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2644.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2644.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 18:31:59.079: INFO: DNS probes using dns-2644/dns-test-d3184acd-8fcb-4d3e-ba4c-f754f1fbb825 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:31:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2644" for this suite.
•{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":346,"completed":90,"skipped":1469,"failed":0}
SSSS
------------------------------
[sig-network] Services
  should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:31:59.532: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-7180
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7180 to expose endpoints map[]
Jan 24 18:32:00.473: INFO: successfully validated that service multi-endpoint-test in namespace services-7180 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-7180
Jan 24 18:32:00.683: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:32:02.787: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7180 to expose endpoints map[pod1:[100]]
Jan 24 18:32:03.301: INFO: successfully validated that service multi-endpoint-test in namespace services-7180 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-7180
Jan 24 18:32:03.510: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:32:05.614: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7180 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 24 18:32:06.232: INFO: successfully validated that service multi-endpoint-test in namespace services-7180 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jan 24 18:32:06.232: INFO: Creating new exec pod
Jan 24 18:32:09.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7180 exec execpods4h9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jan 24 18:32:10.695: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Jan 24 18:32:10.695: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:32:10.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7180 exec execpods4h9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.226.70 80'
Jan 24 18:32:11.829: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.97.226.70 80\nConnection to 10.97.226.70 80 port [tcp/http] succeeded!\n"
Jan 24 18:32:11.829: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:32:11.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7180 exec execpods4h9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81'
Jan 24 18:32:12.955: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 81\n+ echo hostName\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n"
Jan 24 18:32:12.955: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:32:12.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7180 exec execpods4h9q -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.97.226.70 81'
Jan 24 18:32:14.070: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.97.226.70 81\nConnection to 10.97.226.70 81 port [tcp/*] succeeded!\n"
Jan 24 18:32:14.070: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod1 in namespace services-7180
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7180 to expose endpoints map[pod2:[101]]
Jan 24 18:32:14.599: INFO: successfully validated that service multi-endpoint-test in namespace services-7180 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-7180
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7180 to expose endpoints map[]
Jan 24 18:32:15.029: INFO: successfully validated that service multi-endpoint-test in namespace services-7180 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:32:15.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7180" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":346,"completed":91,"skipped":1473,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:32:15.358: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name projected-secret-test-916cafd5-23a8-4738-bee7-0f08b12f789a
STEP: Creating a pod to test consume secrets
Jan 24 18:32:16.089: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c" in namespace "projected-7393" to be "Succeeded or Failed"
Jan 24 18:32:16.192: INFO: Pod "pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c": Phase="Pending", Reason="", readiness=false. Elapsed: 102.943728ms
Jan 24 18:32:18.297: INFO: Pod "pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207176133s
STEP: Saw pod success
Jan 24 18:32:18.297: INFO: Pod "pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c" satisfied condition "Succeeded or Failed"
Jan 24 18:32:18.401: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c container secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:32:18.619: INFO: Waiting for pod pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c to disappear
Jan 24 18:32:18.721: INFO: Pod pod-projected-secrets-9438af4d-6ed9-483e-8ffb-db74299b1e0c no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:32:18.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7393" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":346,"completed":92,"skipped":1487,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment
  deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:32:18.936: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:32:19.657: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 24 18:32:21.868: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 24 18:32:23.976: INFO: Creating deployment "test-rollover-deployment"
Jan 24 18:32:24.198: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 24 18:32:24.301: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 24 18:32:24.507: INFO: Ensure that both replica sets have 1 created replica
Jan 24 18:32:24.715: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 24 18:32:24.924: INFO: Updating deployment test-rollover-deployment
Jan 24 18:32:24.924: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 24 18:32:25.032: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 24 18:32:25.239: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 24 18:32:25.447: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:25.447: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:32:27.655: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:27.655: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:32:29.656: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:29.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181947, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:32:31.656: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:31.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181947, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:32:33.664: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:33.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181947, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 24 18:32:35.656: INFO: all replica sets need to contain the pod-template-hash label
Jan 24 18:32:35.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181947, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 18:32:37.662: INFO: all replica sets need to contain the pod-template-hash label Jan 24 18:32:37.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181947, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810181944, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-98c5f4599\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 24 18:32:39.655: INFO: Jan 24 18:32:39.655: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 24 18:32:39.965: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9817 915d344b-f582-4ef1-8720-07497dcd44f0 11114 2 2023-01-24 18:32:24 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-24 18:32:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost 
k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003405968 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-24 18:32:24 +0000 UTC,LastTransitionTime:2023-01-24 18:32:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-98c5f4599" has successfully progressed.,LastUpdateTime:2023-01-24 18:32:37 +0000 UTC,LastTransitionTime:2023-01-24 18:32:24 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 24 18:32:40.069: INFO: New ReplicaSet "test-rollover-deployment-98c5f4599" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-98c5f4599 deployment-9817 21963782-4e94-4089-99d0-050af5c9cdf7 11104 2 2023-01-24 18:32:24 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:98c5f4599] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 915d344b-f582-4ef1-8720-07497dcd44f0 0xc00635f750 0xc00635f751}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:32:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915d344b-f582-4ef1-8720-07497dcd44f0\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:32:37 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 98c5f4599,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:98c5f4599] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.32 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] 
[] Always 0xc00635f7e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:32:40.070: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 24 18:32:40.070: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9817 ed20cd2c-afad-43b1-ad0a-73d3234d00b4 11113 2 2023-01-24 18:32:19 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 915d344b-f582-4ef1-8720-07497dcd44f0 0xc00635f507 0xc00635f508}] [] [{e2e.test Update apps/v1 2023-01-24 18:32:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:32:37 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915d344b-f582-4ef1-8720-07497dcd44f0\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:32:37 +0000 UTC FieldsV1 
{"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00635f5c8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:32:40.070: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-9817 f29209d7-8891-433d-b640-ad26073501ff 11064 2 2023-01-24 18:32:24 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 915d344b-f582-4ef1-8720-07497dcd44f0 0xc00635f637 0xc00635f638}] [] [{kube-controller-manager Update apps/v1 2023-01-24 18:32:24 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"915d344b-f582-4ef1-8720-07497dcd44f0\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-24 18:32:24 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00635f6e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] 
<nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 24 18:32:40.174: INFO: Pod "test-rollover-deployment-98c5f4599-bhkdp" is available: &Pod{ObjectMeta:{test-rollover-deployment-98c5f4599-bhkdp test-rollover-deployment-98c5f4599- deployment-9817 0cb3bc82-6a9f-4e7e-9110-44eb7a363ea2 11087 0 2023-01-24 18:32:24 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:98c5f4599] map[cni.projectcalico.org/containerID:6936d71756aff8fb87f613cec5fb1c72b48abc49c6e57c2aa7c1c22fc46b563a cni.projectcalico.org/podIP:192.168.69.219/32 cni.projectcalico.org/podIPs:192.168.69.219/32] [{apps/v1 ReplicaSet test-rollover-deployment-98c5f4599 21963782-4e94-4089-99d0-050af5c9cdf7 0xc003405e20 0xc003405e21}] [] [{kube-controller-manager Update v1 2023-01-24 18:32:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"21963782-4e94-4089-99d0-050af5c9cdf7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {calico Update v1 2023-01-24 18:32:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:cni.projectcalico.org/containerID":{},"f:cni.projectcalico.org/podIP":{},"f:cni.projectcalico.org/podIPs":{}}}} status} {kubelet Update v1 2023-01-24 18:32:27 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.69.219\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-7fzts,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:Resource
List{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fzts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:capz-conf-ewh6sx-md-0-xf5qq,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:ni
l,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:32:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:32:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:32:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-24 18:32:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.1.0.5,PodIP:192.168.69.219,StartTime:2023-01-24 18:32:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-24 18:32:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:128,Signal:0,Reason:StartError,Message:failed to create containerd task: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:430: updating the spec state caused: invalid state transition from stopped to paused: unknown,StartedAt:1970-01-01 00:00:00 +0000 UTC,FinishedAt:2023-01-24 18:32:26 +0000 
UTC,ContainerID:containerd://87c6407469c93a85d57ee4cbbc1d06a068469568a5a30610886968fcbcb874da,},},Ready:true,RestartCount:1,Image:k8s.gcr.io/e2e-test-images/agnhost:2.32,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1,ContainerID:containerd://aa5c547a228b16ba0c7abc34beef295d6b24ed56a04a882c8271b744483ac336,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.69.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:32:40.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9817" for this suite.
•{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":346,"completed":93,"skipped":1507,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:32:40.388: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod test-webserver-7b47be84-84f4-4cc2-93c7-a4c93dcbc4fc in namespace container-probe-7062
Jan 24 18:32:43.220: INFO: Started pod test-webserver-7b47be84-84f4-4cc2-93c7-a4c93dcbc4fc in namespace container-probe-7062
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 18:32:43.324: INFO: Initial restart count of pod test-webserver-7b47be84-84f4-4cc2-93c7-a4c93dcbc4fc is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:36:45.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7062" for this suite.
• [SLOW TEST:245.308 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":346,"completed":94,"skipped":1526,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:36:45.696: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota.
[Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:36:46.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6905" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":346,"completed":95,"skipped":1542,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:36:47.056: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting
for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:36:48.812: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182208, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182208, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182208, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182208, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:36:52.044: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:36:52.148: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:36:55.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8166" for this suite.
STEP: Destroying namespace "webhook-8166-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":346,"completed":96,"skipped":1584,"failed":0}
------------------------------
[sig-apps] ReplicaSet
  Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:36:56.615: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota
Jan 24 18:36:57.349: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the replicaset Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:37:00.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-529" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":346,"completed":97,"skipped":1606,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:37:00.312: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Jan 24 18:37:41.896: INFO: The status of Pod kube-controller-manager-capz-conf-ewh6sx-control-plane-pt2q9 is Running (Ready = true)
Jan 24 18:37:42.988: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 24 18:37:42.988: INFO: Deleting pod "simpletest.rc-2shhf" in namespace "gc-1974"
Jan 24 18:37:43.101: INFO: Deleting pod "simpletest.rc-5whqh" in namespace "gc-1974"
Jan 24 18:37:43.217: INFO: Deleting pod "simpletest.rc-6nxdb" in namespace "gc-1974"
Jan 24 18:37:43.328: INFO: Deleting pod "simpletest.rc-84q99" in namespace "gc-1974"
Jan 24 18:37:43.443: INFO: Deleting pod "simpletest.rc-8fn4v" in namespace "gc-1974"
Jan 24 18:37:43.556: INFO: Deleting pod "simpletest.rc-d6hw9" in namespace "gc-1974"
Jan 24 18:37:43.689: INFO: Deleting pod "simpletest.rc-gfgsq" in namespace "gc-1974"
Jan 24 18:37:43.816: INFO: Deleting pod "simpletest.rc-nxz8x" in namespace "gc-1974"
Jan 24 18:37:43.962: INFO: Deleting pod "simpletest.rc-t45mj" in namespace "gc-1974"
Jan 24 18:37:44.079: INFO: Deleting pod "simpletest.rc-z7qh4" in namespace "gc-1974"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:37:44.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1974" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":346,"completed":98,"skipped":1687,"failed":0}
------------------------------
[sig-api-machinery] Namespaces [Serial]
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:37:44.408: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:37:59.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6722" for this suite.
STEP: Destroying namespace "nsdeletetest-3984" for this suite.
Jan 24 18:37:59.705: INFO: Namespace nsdeletetest-3984 was already deleted
STEP: Destroying namespace "nsdeletetest-876" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":346,"completed":99,"skipped":1688,"failed":0}
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:37:59.808: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-35914049-88d7-45e1-9735-dabb77cfce48
STEP: Creating a pod to test consume secrets
Jan 24 18:38:00.536: INFO: Waiting up to 5m0s for pod "pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59" in namespace "secrets-8777" to be "Succeeded or Failed"
Jan 24 18:38:00.639: INFO: Pod "pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59": Phase="Pending", Reason="", readiness=false. Elapsed: 102.597016ms
Jan 24 18:38:02.743: INFO: Pod "pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206915135s
STEP: Saw pod success
Jan 24 18:38:02.743: INFO: Pod "pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59" satisfied condition "Succeeded or Failed"
Jan 24 18:38:02.847: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59 container secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:38:03.074: INFO: Waiting for pod pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59 to disappear
Jan 24 18:38:03.176: INFO: Pod pod-secrets-28f13d3b-eea8-44c1-b88d-2a7439afff59 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:38:03.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8777" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":100,"skipped":1715,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:38:03.390: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
Jan 24 18:38:15.928: INFO: The status of Pod kube-controller-manager-capz-conf-ewh6sx-control-plane-pt2q9 is Running (Ready = true)
Jan 24 18:38:16.989: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Jan 24 18:38:16.989: INFO: Deleting pod "simpletest-rc-to-be-deleted-4msl2" in namespace "gc-9983"
Jan 24 18:38:17.106: INFO: Deleting pod "simpletest-rc-to-be-deleted-5cjsb" in namespace "gc-9983"
Jan 24 18:38:17.224: INFO: Deleting pod "simpletest-rc-to-be-deleted-9h48w" in namespace "gc-9983"
Jan 24 18:38:17.338: INFO: Deleting pod "simpletest-rc-to-be-deleted-b6mqk" in namespace "gc-9983"
Jan 24 18:38:17.450: INFO: Deleting pod "simpletest-rc-to-be-deleted-h7tng" in namespace "gc-9983"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:38:17.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9983" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":346,"completed":101,"skipped":1724,"failed":0}
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:38:17.780: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-0c31282e-e2cd-4289-81f3-b48c1aed86e9
STEP: Creating a pod to test consume secrets
Jan 24 18:38:18.506: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d" in namespace "projected-2002" to be "Succeeded or Failed"
Jan 24 18:38:18.608: INFO: Pod "pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.654375ms
Jan 24 18:38:20.713: INFO: Pod "pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206870589s
STEP: Saw pod success
Jan 24 18:38:20.713: INFO: Pod "pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d" satisfied condition "Succeeded or Failed"
Jan 24 18:38:20.816: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:38:21.050: INFO: Waiting for pod pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d to disappear
Jan 24 18:38:21.153: INFO: Pod pod-projected-secrets-d2207e71-8845-4d22-9b91-5434c8f01c2d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:38:21.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2002" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":346,"completed":102,"skipped":1731,"failed":0}
------------------------------
[sig-node] Container Runtime
  blackbox test
  on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:38:21.367: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 24 18:38:23.304: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:38:23.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5712" for this suite.
•{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":346,"completed":103,"skipped":1737,"failed":0}
------------------------------
[sig-network] Services
  should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:38:23.746: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-2471
Jan 24 18:38:24.469: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:38:26.572: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jan 24 18:38:26.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 24 18:38:28.334: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 24 18:38:28.334: INFO: stdout: "iptables"
Jan 24 18:38:28.334: INFO: proxyMode: iptables
Jan 24 18:38:28.444: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 24 18:38:28.546: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-2471
STEP: creating replication controller affinity-clusterip-timeout in namespace services-2471
I0124 18:38:28.771930 14 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-2471, replica count: 3
I0124 18:38:31.922899 14 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 18:38:32.135: INFO: Creating new exec pod
Jan 24 18:38:35.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80'
Jan 24 18:38:36.604: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Jan 24 18:38:36.604: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:38:36.604: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.111.179.173 80'
Jan 24 18:38:37.735: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.111.179.173 80\nConnection to 10.111.179.173 80 port [tcp/http] succeeded!\n"
Jan 24 18:38:37.735: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 24 18:38:37.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.111.179.173:80/ ; done'
Jan 24 18:38:38.925: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n"
Jan 24 18:38:38.925: INFO: stdout: "\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b\naffinity-clusterip-timeout-2fj9b"
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Received response from host: affinity-clusterip-timeout-2fj9b
Jan 24 18:38:38.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.179.173:80/'
Jan 24 18:38:40.055: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n"
Jan 24 18:38:40.055: INFO: stdout: "affinity-clusterip-timeout-2fj9b"
Jan 24 18:39:00.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.179.173:80/'
Jan 24 18:39:01.208: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n"
Jan 24 18:39:01.208: INFO: stdout: "affinity-clusterip-timeout-2fj9b"
Jan 24 18:39:21.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2471 exec execpod-affinityk74vq -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.111.179.173:80/'
Jan 24 18:39:22.344: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.111.179.173:80/\n"
Jan 24 18:39:22.344: INFO: stdout: "affinity-clusterip-timeout-949n6"
Jan 24 18:39:22.344: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-2471, will wait for the garbage collector to delete the pods
Jan 24 18:39:22.920: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 105.279393ms
Jan 24 18:39:23.021: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.855919ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:39:25.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2471" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":346,"completed":104,"skipped":1741,"failed":0}
------------------------------
[sig-cli] Kubectl client
  Kubectl version
  should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:39:26.071: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:39:26.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-814 version'
Jan 24 18:39:27.056: INFO: stderr: ""
Jan 24 18:39:27.056: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.1\", GitCommit:\"632ed300f2c34f6d6d15ca4cef3d3c7073412212\", GitTreeState:\"clean\", BuildDate:\"2021-08-19T15:45:37Z\", GoVersion:\"go1.16.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22\", GitVersion:\"v1.22.1\", GitCommit:\"632ed300f2c34f6d6d15ca4cef3d3c7073412212\", GitTreeState:\"clean\", BuildDate:\"2021-08-19T15:39:34Z\", GoVersion:\"go1.16.7\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:39:27.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-814" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":346,"completed":105,"skipped":1742,"failed":0}
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:39:27.271: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Jan 24 18:39:27.789: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 24 18:39:31.927: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:39:49.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-7414" for this suite. �[32m•�[0m{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":346,"completed":106,"skipped":1779,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] Projected configMap�[0m �[1mshould be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:39:49.447: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-2c9c3b30-4f97-4d18-93b0-bdd63b7372de �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 24 18:39:50.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4" in namespace "projected-3585" to be "Succeeded or Failed" Jan 24 18:39:50.278: INFO: Pod "pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.521754ms Jan 24 18:39:52.383: INFO: Pod "pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207353082s Jan 24 18:39:54.492: INFO: Pod "pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31642165s �[1mSTEP�[0m: Saw pod success Jan 24 18:39:54.492: INFO: Pod "pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4" satisfied condition "Succeeded or Failed" Jan 24 18:39:54.597: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:39:54.825: INFO: Waiting for pod pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4 to disappear Jan 24 18:39:54.927: INFO: Pod pod-projected-configmaps-925b90f7-5b5c-433e-ab7d-c9a03ddc29b4 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:39:54.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3585" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":107,"skipped":1797,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:39:55.142: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 24 18:39:55.767: INFO: Waiting up to 5m0s for pod "pod-83525208-f604-438a-a2ff-a7956b7d6101" in namespace "emptydir-7747" to be "Succeeded or Failed"
Jan 24 18:39:55.870: INFO: Pod "pod-83525208-f604-438a-a2ff-a7956b7d6101": Phase="Pending", Reason="", readiness=false. Elapsed: 103.020008ms
Jan 24 18:39:57.977: INFO: Pod "pod-83525208-f604-438a-a2ff-a7956b7d6101": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.21015651s
STEP: Saw pod success
Jan 24 18:39:57.977: INFO: Pod "pod-83525208-f604-438a-a2ff-a7956b7d6101" satisfied condition "Succeeded or Failed"
Jan 24 18:39:58.081: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-83525208-f604-438a-a2ff-a7956b7d6101 container test-container: <nil>
STEP: delete the pod
Jan 24 18:39:58.304: INFO: Waiting for pod pod-83525208-f604-438a-a2ff-a7956b7d6101 to disappear
Jan 24 18:39:58.407: INFO: Pod pod-83525208-f604-438a-a2ff-a7956b7d6101 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:39:58.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7747" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":108,"skipped":1850,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:39:58.620: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:40:00.523: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182400, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182400, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182400, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810182400, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:40:03.742: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:40:05.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1696" for this suite.
STEP: Destroying namespace "webhook-1696-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":346,"completed":109,"skipped":1871,"failed":0}
------------------------------
[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:40:05.856: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
Jan 24 18:40:06.584: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:40:08.688: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Jan 24 18:40:08.998: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:40:11.103: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jan 24 18:40:11.315: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 18:40:11.421: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 24 18:40:13.421: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 24 18:40:13.525: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:40:13.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5154" for this suite.
•{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":346,"completed":110,"skipped":1914,"failed":0}
------------------------------
[sig-node] Probing container
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:40:13.849: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 in namespace container-probe-8761
Jan 24 18:40:16.682: INFO: Started pod liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 in namespace container-probe-8761
STEP: checking the pod's current state and verifying that restartCount is present
Jan 24 18:40:16.785: INFO: Initial restart count of pod liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is 0
Jan 24 18:40:35.862: INFO: Restart count of pod container-probe-8761/liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is now 1 (19.076611813s elapsed)
Jan 24 18:40:56.908: INFO: Restart count of pod container-probe-8761/liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is now 2 (40.122716679s elapsed)
Jan 24 18:41:15.852: INFO: Restart count of pod container-probe-8761/liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is now 3 (59.066750515s elapsed)
Jan 24 18:41:36.898: INFO: Restart count of pod container-probe-8761/liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is now 4 (1m20.112756044s elapsed)
Jan 24 18:42:35.862: INFO: Restart count of pod container-probe-8761/liveness-ded1b124-bfd9-4788-83a5-7e6e8d396a90 is now 5 (2m19.077078339s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:35.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8761" for this suite.
• [SLOW TEST:142.341 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":346,"completed":111,"skipped":1932,"failed":0}
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:36.191: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with configMap that has name projected-configmap-test-upd-bec61b38-abf4-406f-ba2e-4587d605fe94
STEP: Creating the pod
Jan 24 18:42:37.126: INFO: The status of Pod pod-projected-configmaps-93902eff-098c-4edb-99bf-e09141357dd7 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:42:39.235: INFO: The status of Pod pod-projected-configmaps-93902eff-098c-4edb-99bf-e09141357dd7 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-bec61b38-abf4-406f-ba2e-4587d605fe94
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:41.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7852" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":112,"skipped":1948,"failed":0}
------------------------------
[sig-storage] Secrets
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:41.977: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-3a6d0faf-cb7b-4530-a157-052d3090691a
STEP: Creating a pod to test consume secrets
Jan 24 18:42:42.704: INFO: Waiting up to 5m0s for pod "pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b" in namespace "secrets-2627" to be "Succeeded or Failed"
Jan 24 18:42:42.806: INFO: Pod "pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 102.350826ms
Jan 24 18:42:44.910: INFO: Pod "pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206574684s
STEP: Saw pod success
Jan 24 18:42:44.910: INFO: Pod "pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b" satisfied condition "Succeeded or Failed"
Jan 24 18:42:45.014: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b container secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:42:45.241: INFO: Waiting for pod pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b to disappear
Jan 24 18:42:45.343: INFO: Pod pod-secrets-85d4318b-fc75-4f00-9391-8a2447186f7b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:45.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2627" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":113,"skipped":1960,"failed":0}
------------------------------
[sig-storage] Downward API volume
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:45.557: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Jan 24 18:42:46.286: INFO: The status of Pod annotationupdatecebfbe59-8a51-4da1-b25d-c6cdfffba2b2 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:42:48.390: INFO: The status of Pod annotationupdatecebfbe59-8a51-4da1-b25d-c6cdfffba2b2 is Running (Ready = true)
Jan 24 18:42:49.309: INFO: Successfully updated pod "annotationupdatecebfbe59-8a51-4da1-b25d-c6cdfffba2b2"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:51.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4579" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":346,"completed":114,"skipped":1980,"failed":0}
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:51.736: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-afacc925-d4ba-4c4d-9cd3-75c63dff6c88
STEP: Creating a pod to test consume configMaps
Jan 24 18:42:52.461: INFO: Waiting up to 5m0s for pod "pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a" in namespace "configmap-3508" to be "Succeeded or Failed"
Jan 24 18:42:52.564: INFO: Pod "pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a": Phase="Pending", Reason="", readiness=false. Elapsed: 103.073888ms
Jan 24 18:42:54.671: INFO: Pod "pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209868191s
STEP: Saw pod success
Jan 24 18:42:54.671: INFO: Pod "pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a" satisfied condition "Succeeded or Failed"
Jan 24 18:42:54.776: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a container agnhost-container: <nil>
STEP: delete the pod
Jan 24 18:42:54.994: INFO: Waiting for pod pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a to disappear
Jan 24 18:42:55.097: INFO: Pod pod-configmaps-9610187a-62b4-427b-9eeb-724e9adc268a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:55.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3508" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":115,"skipped":1988,"failed":0}
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:55.310: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Jan 24 18:42:57.185: INFO: The status of Pod kube-controller-manager-capz-conf-ewh6sx-control-plane-pt2q9 is Running (Ready = true)
Jan 24 18:42:58.254: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:42:58.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7917" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":346,"completed":116,"skipped":1990,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:42:58.469: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 18:42:59.090: INFO: Waiting up to 5m0s for pod "pod-b9bf5d71-399e-420f-9fd0-597348b4268c" in namespace "emptydir-356" to be "Succeeded or Failed"
Jan 24 18:42:59.192: INFO: Pod "pod-b9bf5d71-399e-420f-9fd0-597348b4268c": Phase="Pending", Reason="", readiness=false. Elapsed: 102.599636ms
Jan 24 18:43:01.297: INFO: Pod "pod-b9bf5d71-399e-420f-9fd0-597348b4268c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206972785s
STEP: Saw pod success
Jan 24 18:43:01.297: INFO: Pod "pod-b9bf5d71-399e-420f-9fd0-597348b4268c" satisfied condition "Succeeded or Failed"
Jan 24 18:43:01.414: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-b9bf5d71-399e-420f-9fd0-597348b4268c container test-container: <nil>
STEP: delete the pod
Jan 24 18:43:01.633: INFO: Waiting for pod pod-b9bf5d71-399e-420f-9fd0-597348b4268c to disappear
Jan 24 18:43:01.736: INFO: Pod pod-b9bf5d71-399e-420f-9fd0-597348b4268c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:43:01.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-356" for this suite.
�[32m•�[0m{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":117,"skipped":2013,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] ReplicationController�[0m �[1mshould release no longer matching pods [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:43:01.952: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Given a ReplicationController is created �[1mSTEP�[0m: When the matched label of one of its pods change Jan 24 18:43:02.681: INFO: Pod name pod-release: Found 1 pods out of 1 �[1mSTEP�[0m: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:43:02.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying 
namespace "replication-controller-2394" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":346,"completed":118,"skipped":2044,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] Projected downwardAPI�[0m �[1mshould set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:43:03.205: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 24 18:43:03.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee" in namespace "projected-9415" to be "Succeeded or Failed" Jan 24 18:43:03.939: INFO: Pod "downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee": Phase="Pending", Reason="", readiness=false. Elapsed: 103.070043ms Jan 24 18:43:06.044: INFO: Pod "downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.20845161s Jan 24 18:43:08.148: INFO: Pod "downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312867765s �[1mSTEP�[0m: Saw pod success Jan 24 18:43:08.148: INFO: Pod "downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee" satisfied condition "Succeeded or Failed" Jan 24 18:43:08.252: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:43:08.470: INFO: Waiting for pod downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee to disappear Jan 24 18:43:08.573: INFO: Pod downwardapi-volume-72f31b9d-958a-4714-9749-20894f698eee no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:43:08.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9415" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":119,"skipped":2058,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] StatefulSet�[0m �[90mBasic StatefulSet functionality [StatefulSetBasic]�[0m �[1mshould perform canary updates and phased rolling updates of template modifications [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:43:08.788: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 �[1mSTEP�[0m: Creating service test in namespace statefulset-425 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 
�[1mSTEP�[0m: Creating a new StatefulSet Jan 24 18:43:09.620: INFO: Found 1 stateful pods, waiting for 3 Jan 24 18:43:19.728: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:19.728: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:19.728: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1 Jan 24 18:43:20.260: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Not applying an update when the partition is greater than the number of replicas �[1mSTEP�[0m: Performing a canary update Jan 24 18:43:20.685: INFO: Updating stateful set ss2 Jan 24 18:43:20.893: INFO: Waiting for Pod statefulset-425/ss2-2 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 �[1mSTEP�[0m: Restoring Pods to the correct revision when they are deleted Jan 24 18:43:31.445: INFO: Found 2 stateful pods, waiting for 3 Jan 24 18:43:41.552: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:41.552: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:41.552: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 18:43:51.552: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:51.552: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:43:51.552: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Performing a phased rolling update Jan 24 18:43:51.981: INFO: Updating stateful set ss2 Jan 24 18:43:52.220: INFO: Waiting for Pod statefulset-425/ss2-1 to have revision 
ss2-5bbbc9fc94 update revision ss2-677d6db895 Jan 24 18:44:02.648: INFO: Updating stateful set ss2 Jan 24 18:44:02.855: INFO: Waiting for StatefulSet statefulset-425/ss2 to complete update Jan 24 18:44:02.855: INFO: Waiting for Pod statefulset-425/ss2-0 to have revision ss2-5bbbc9fc94 update revision ss2-677d6db895 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Jan 24 18:44:13.065: INFO: Deleting all statefulset in ns statefulset-425 Jan 24 18:44:13.167: INFO: Scaling statefulset ss2 to 0 Jan 24 18:44:23.583: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 18:44:23.687: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:44:23.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-425" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":346,"completed":120,"skipped":2102,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] Projected secret�[0m �[1mshould be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:44:24.214: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating projection with secret that has name projected-secret-test-45858127-2548-441d-9dda-7f3049b94ecd �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 24 18:44:24.942: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9" in namespace "projected-3884" to be "Succeeded or Failed" Jan 24 18:44:25.045: INFO: Pod "pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 102.977122ms Jan 24 18:44:27.149: INFO: Pod "pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206664833s �[1mSTEP�[0m: Saw pod success Jan 24 18:44:27.149: INFO: Pod "pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9" satisfied condition "Succeeded or Failed" Jan 24 18:44:27.253: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9 container projected-secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:44:27.472: INFO: Waiting for pod pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9 to disappear Jan 24 18:44:27.575: INFO: Pod pod-projected-secrets-91e07749-4125-41da-81bb-7a26e7aeebe9 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:44:27.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3884" for this suite. 
�[32m•�[0m{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":121,"skipped":2122,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] Downward API volume�[0m �[1mshould provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:44:27.788: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 24 18:44:28.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d" in namespace "downward-api-3034" to be "Succeeded or Failed" Jan 24 18:44:28.512: INFO: Pod "downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d": Phase="Pending", Reason="", readiness=false. Elapsed: 103.157335ms Jan 24 18:44:30.617: INFO: Pod "downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.20761379s �[1mSTEP�[0m: Saw pod success Jan 24 18:44:30.617: INFO: Pod "downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d" satisfied condition "Succeeded or Failed" Jan 24 18:44:30.720: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:44:30.951: INFO: Waiting for pod downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d to disappear Jan 24 18:44:31.055: INFO: Pod downwardapi-volume-9e516616-c05d-475a-97c0-d7db98abde6d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:44:31.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-3034" for this suite. �[32m•�[0m{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":346,"completed":122,"skipped":2130,"failed":0} �[36mS�[0m �[90m------------------------------�[0m �[0m[sig-apps] StatefulSet�[0m �[90mBasic StatefulSet functionality [StatefulSetBasic]�[0m �[1mBurst scaling should run to completion even with unhealthy pods [Slow] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:44:31.268: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107 �[1mSTEP�[0m: Creating service test in namespace statefulset-9084 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating stateful set ss in namespace statefulset-9084 �[1mSTEP�[0m: Waiting until all stateful set ss replicas will be running in namespace statefulset-9084 Jan 24 18:44:32.092: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Jan 24 18:44:42.196: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 24 18:44:42.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 18:44:43.474: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 24 18:44:43.474: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 18:44:43.474: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 18:44:43.579: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 24 18:44:53.686: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 18:44:53.686: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 18:44:54.108: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999394s Jan 
24 18:44:55.214: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.890326962s Jan 24 18:44:56.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.783289219s Jan 24 18:44:57.427: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.677384985s Jan 24 18:44:58.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.570416047s Jan 24 18:44:59.639: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.465235507s Jan 24 18:45:00.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.358473142s Jan 24 18:45:01.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.250141152s Jan 24 18:45:02.965: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.143372491s Jan 24 18:45:04.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 32.408204ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9084 Jan 24 18:45:05.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 18:45:06.319: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 24 18:45:06.319: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 18:45:06.319: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 18:45:06.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 18:45:07.476: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 24 18:45:07.476: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" Jan 24 18:45:07.476: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 18:45:07.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 24 18:45:08.586: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 24 18:45:08.586: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 24 18:45:08.586: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 24 18:45:08.691: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:45:08.691: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 24 18:45:08.691: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Scale down will not halt with unhealthy stateful pod Jan 24 18:45:08.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 18:45:09.920: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 24 18:45:09.920: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 18:45:09.920: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 18:45:09.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 
18:45:11.045: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 24 18:45:11.045: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 18:45:11.045: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 18:45:11.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9084 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 24 18:45:12.162: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 24 18:45:12.162: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 24 18:45:12.162: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 24 18:45:12.162: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 18:45:12.265: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 24 18:45:22.476: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 24 18:45:22.476: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 24 18:45:22.476: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 24 18:45:22.792: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 18:45:22.792: INFO: ss-0 capz-conf-ewh6sx-md-0-xf5qq Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:31 +0000 UTC }] Jan 24 
18:45:22.792: INFO: ss-1 capz-conf-ewh6sx-md-0-tb56s Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC }] Jan 24 18:45:22.792: INFO: ss-2 capz-conf-ewh6sx-md-0-xf5qq Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC }] Jan 24 18:45:22.792: INFO: Jan 24 18:45:22.792: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 18:45:23.898: INFO: POD NODE PHASE GRACE CONDITIONS Jan 24 18:45:23.898: INFO: ss-0 capz-conf-ewh6sx-md-0-xf5qq Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:09 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:31 +0000 UTC }] Jan 24 18:45:23.898: INFO: ss-1 capz-conf-ewh6sx-md-0-tb56s Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:11 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC }] Jan 24 18:45:23.898: INFO: ss-2 capz-conf-ewh6sx-md-0-xf5qq Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:45:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-24 18:44:53 +0000 UTC }] Jan 24 18:45:23.898: INFO: Jan 24 18:45:23.898: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 24 18:45:25.001: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.788006157s Jan 24 18:45:26.106: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.684114309s Jan 24 18:45:27.210: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.580303679s Jan 24 18:45:28.313: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.476628696s Jan 24 18:45:29.416: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.373583393s Jan 24 18:45:30.807: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.270422811s Jan 24 18:45:31.910: INFO: Verifying statefulset ss doesn't scale past 0 for another 879.442035ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-9084 Jan 24 18:45:33.013: INFO: Scaling statefulset ss to 0 Jan 24 18:45:33.422: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118 Jan 24 18:45:33.524: INFO: 
Deleting all statefulset in ns statefulset-9084 Jan 24 18:45:33.626: INFO: Scaling statefulset ss to 0 Jan 24 18:45:33.938: INFO: Waiting for statefulset status.replicas updated to 0 Jan 24 18:45:34.041: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:45:34.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-9084" for this suite. �[32m•�[0m{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":346,"completed":123,"skipped":2131,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m �[0m[sig-storage] EmptyDir volumes�[0m �[1mshould support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]�[0m �[37m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 24 18:45:34.564: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0666 on node default medium Jan 24 18:45:35.399: INFO: Waiting up to 5m0s for pod 
"pod-1f7bce62-d80a-4f75-901f-a527880c2775" in namespace "emptydir-6593" to be "Succeeded or Failed" Jan 24 18:45:35.502: INFO: Pod "pod-1f7bce62-d80a-4f75-901f-a527880c2775": Phase="Pending", Reason="", readiness=false. Elapsed: 102.408748ms Jan 24 18:45:37.605: INFO: Pod "pod-1f7bce62-d80a-4f75-901f-a527880c2775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205999273s �[1mSTEP�[0m: Saw pod success Jan 24 18:45:37.605: INFO: Pod "pod-1f7bce62-d80a-4f75-901f-a527880c2775" satisfied condition "Succeeded or Failed" Jan 24 18:45:37.708: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-1f7bce62-d80a-4f75-901f-a527880c2775 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 24 18:45:37.926: INFO: Waiting for pod pod-1f7bce62-d80a-4f75-901f-a527880c2775 to disappear Jan 24 18:45:38.029: INFO: Pod pod-1f7bce62-d80a-4f75-901f-a527880c2775 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:45:38.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-6593" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":124,"skipped":2151,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath
  Atomic writer volumes
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:45:38.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-x47p
STEP: Creating a pod to test atomic-volume-subpath
Jan 24 18:45:39.071: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-x47p" in namespace "subpath-2307" to be "Succeeded or Failed"
Jan 24 18:45:39.174: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Pending", Reason="", readiness=false. Elapsed: 102.973306ms
Jan 24 18:45:41.278: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 2.206583268s
Jan 24 18:45:43.382: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 4.311174892s
Jan 24 18:45:45.488: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 6.416512147s
Jan 24 18:45:47.592: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 8.52036323s
Jan 24 18:45:49.696: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 10.624805632s
Jan 24 18:45:51.800: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 12.729177692s
Jan 24 18:45:53.905: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 14.8340125s
Jan 24 18:45:56.010: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 16.938562035s
Jan 24 18:45:58.115: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 19.043810645s
Jan 24 18:46:00.221: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Running", Reason="", readiness=true. Elapsed: 21.150290571s
Jan 24 18:46:02.327: INFO: Pod "pod-subpath-test-configmap-x47p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 23.255331097s
STEP: Saw pod success
Jan 24 18:46:02.327: INFO: Pod "pod-subpath-test-configmap-x47p" satisfied condition "Succeeded or Failed"
Jan 24 18:46:02.430: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-subpath-test-configmap-x47p container test-container-subpath-configmap-x47p: <nil>
STEP: delete the pod
Jan 24 18:46:02.652: INFO: Waiting for pod pod-subpath-test-configmap-x47p to disappear
Jan 24 18:46:02.754: INFO: Pod pod-subpath-test-configmap-x47p no longer exists
STEP: Deleting pod pod-subpath-test-configmap-x47p
Jan 24 18:46:02.755: INFO: Deleting pod "pod-subpath-test-configmap-x47p" in namespace "subpath-2307"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:46:02.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2307" for this suite.
•{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":346,"completed":125,"skipped":2164,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:46:03.071: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
Jan 24 18:46:03.804: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:46:05.909: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:46:07.910: INFO: The status of Pod test-pod is Running (Ready = true)
STEP: Creating hostNetwork=true pod
Jan 24 18:46:08.324: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:46:10.429: INFO: The status of Pod test-host-network-pod is Running (Ready = true)
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 24 18:46:10.533: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:10.533: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:11.259: INFO: Exec stderr: ""
Jan 24 18:46:11.259: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:11.259: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:11.974: INFO: Exec stderr: ""
Jan 24 18:46:11.974: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:11.974: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:12.688: INFO: Exec stderr: ""
Jan 24 18:46:12.688: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:12.688: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:13.393: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 24 18:46:13.393: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:13.393: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:14.112: INFO: Exec stderr: ""
Jan 24 18:46:14.112: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:14.112: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:14.827: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 24 18:46:14.827: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:14.827: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:15.554: INFO: Exec stderr: ""
Jan 24 18:46:15.554: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:15.554: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:16.253: INFO: Exec stderr: ""
Jan 24 18:46:16.253: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:16.253: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:16.964: INFO: Exec stderr: ""
Jan 24 18:46:16.964: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1633 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 18:46:16.964: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 18:46:17.655: INFO: Exec stderr: ""
[AfterEach] [sig-node] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:46:17.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-1633" for this suite.
•{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":126,"skipped":2183,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:46:17.870: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:46:47.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8014" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":346,"completed":127,"skipped":2194,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:46:47.440: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-8558
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-8558
I0124 18:46:48.387829 14 runners.go:190] Created replication controller with name: externalname-service, namespace: services-8558, replica count: 2
I0124 18:46:51.539221 14 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 24 18:46:51.539: INFO: Creating new exec pod
Jan 24 18:46:54.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8558 exec execpodrxmcf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 24 18:46:56.004: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 24 18:46:56.004: INFO: stdout: ""
Jan 24 18:46:57.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8558 exec execpodrxmcf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 24 18:46:58.121: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 24 18:46:58.121: INFO: stdout: "externalname-service-5kcrb"
Jan 24 18:46:58.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8558 exec execpodrxmcf -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.108.112.174 80'
Jan 24 18:46:59.243: INFO: stderr: "+ nc -v -t -w 2 10.108.112.174 80\n+ echo hostName\nConnection to 10.108.112.174 80 port [tcp/http] succeeded!\n"
Jan 24 18:46:59.243: INFO: stdout: "externalname-service-5kcrb"
Jan 24 18:46:59.243: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:46:59.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8558" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":346,"completed":128,"skipped":2205,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:46:59.786: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: set up a multi version CRD
Jan 24 18:47:00.301: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:47:27.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7552" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":346,"completed":129,"skipped":2219,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:47:27.642: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 24 18:47:28.262: INFO: Waiting up to 5m0s for pod "pod-3512a2df-f8a0-46c7-9385-77205fa6af75" in namespace "emptydir-740" to be "Succeeded or Failed"
Jan 24 18:47:28.364: INFO: Pod "pod-3512a2df-f8a0-46c7-9385-77205fa6af75": Phase="Pending", Reason="", readiness=false. Elapsed: 102.347624ms
Jan 24 18:47:30.469: INFO: Pod "pod-3512a2df-f8a0-46c7-9385-77205fa6af75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20676444s
STEP: Saw pod success
Jan 24 18:47:30.469: INFO: Pod "pod-3512a2df-f8a0-46c7-9385-77205fa6af75" satisfied condition "Succeeded or Failed"
Jan 24 18:47:30.573: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-3512a2df-f8a0-46c7-9385-77205fa6af75 container test-container: <nil>
STEP: delete the pod
Jan 24 18:47:30.813: INFO: Waiting for pod pod-3512a2df-f8a0-46c7-9385-77205fa6af75 to disappear
Jan 24 18:47:30.917: INFO: Pod pod-3512a2df-f8a0-46c7-9385-77205fa6af75 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:47:30.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-740" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":130,"skipped":2220,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
  should validate Statefulset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:47:31.133: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace statefulset-8712
[It] should validate Statefulset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating statefulset ss in namespace statefulset-8712
Jan 24 18:47:32.064: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Jan 24 18:47:42.170: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Patch Statefulset to include a label
STEP: Getting /status
Jan 24 18:47:42.586: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil)
STEP: updating the StatefulSet Status
Jan 24 18:47:42.793: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the statefulset status to be updated
Jan 24 18:47:42.896: INFO: Observed &StatefulSet event: ADDED
Jan 24 18:47:42.896: INFO: Found Statefulset ss in namespace statefulset-8712 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Jan 24 18:47:42.896: INFO: Statefulset ss has an updated status
STEP: patching the Statefulset Status
Jan 24 18:47:42.896: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Jan 24 18:47:43.007: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}}
STEP: watching for the Statefulset status to be patched
Jan 24 18:47:43.110: INFO: Observed &StatefulSet event: ADDED
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jan 24 18:47:43.110: INFO: Deleting all statefulset in ns statefulset-8712
Jan 24 18:47:43.213: INFO: Scaling statefulset ss to 0
Jan 24 18:47:53.631: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 18:47:53.739: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:47:54.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8712" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":346,"completed":131,"skipped":2269,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:47:54.266: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:47:54.893: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff" in namespace "projected-6568" to be "Succeeded or Failed"
Jan 24 18:47:54.996: INFO: Pod "downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff": Phase="Pending", Reason="", readiness=false. Elapsed: 102.504895ms
Jan 24 18:47:57.101: INFO: Pod "downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.20823982s
STEP: Saw pod success
Jan 24 18:47:57.101: INFO: Pod "downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff" satisfied condition "Succeeded or Failed"
Jan 24 18:47:57.206: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff container client-container: <nil>
STEP: delete the pod
Jan 24 18:47:57.433: INFO: Waiting for pod downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff to disappear
Jan 24 18:47:57.536: INFO: Pod downwardapi-volume-f0f07d09-b91b-452c-ab05-12b236a5e3ff no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:47:57.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6568" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":132,"skipped":2281,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events API
  should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:47:57.755: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Jan 24 18:47:58.692: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:47:58.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9290" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":346,"completed":133,"skipped":2344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-instrumentation] Events
  should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:47:59.126: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
Jan 24 18:47:59.754: INFO: created test-event-1
Jan 24 18:47:59.858: INFO: created test-event-2
Jan 24 18:47:59.961: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Jan 24 18:48:00.065: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Jan 24 18:48:00.181: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-instrumentation] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:48:00.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4411" for this suite.
•{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":346,"completed":134,"skipped":2378,"failed":0}
------------------------------
[sig-storage] ConfigMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:48:00.499: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-6aa1c422-6fb0-4998-8638-2a423fbfde8e
STEP: Creating the pod
Jan 24 18:48:01.440: INFO: The status of Pod pod-configmaps-f10cf2e4-7b37-4c69-84f8-44fb61674f08 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:48:03.546: INFO: The status of Pod pod-configmaps-f10cf2e4-7b37-4c69-84f8-44fb61674f08 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 18:48:05.545: INFO: The status of Pod pod-configmaps-f10cf2e4-7b37-4c69-84f8-44fb61674f08 is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-6aa1c422-6fb0-4998-8638-2a423fbfde8e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:49:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5529" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":346,"completed":135,"skipped":2393,"failed":0}
------------------------------
[sig-node] NoExecuteTaintManager Multiple Pods [Serial]
  evicts pods with minTolerationSeconds [Disruptive] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:49:22.006: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename taint-multiple-pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/taints.go:345
Jan 24 18:49:22.521: INFO: Waiting up to 1m0s for all nodes to be ready
Jan 24 18:50:23.301: INFO: Waiting for terminating namespaces to be deleted...
[It] evicts pods with minTolerationSeconds [Disruptive] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:50:23.404: INFO: Starting informer...
STEP: Starting pods...
Jan 24 18:50:23.720: INFO: Pod1 is running on capz-conf-ewh6sx-md-0-tb56s. Tainting Node
Jan 24 18:50:26.239: INFO: Pod2 is running on capz-conf-ewh6sx-md-0-tb56s. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting for Pod1 and Pod2 to be deleted
Jan 24 18:50:32.857: INFO: Noticed Pod "taint-eviction-b1" gets evicted.
Jan 24 18:50:52.898: INFO: Noticed Pod "taint-eviction-b2" gets evicted.
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
[AfterEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:50:53.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-multiple-pods-9253" for this suite.
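The minTolerationSeconds eviction above relies on pods that tolerate the NoExecute taint only for a bounded period. A minimal sketch of such a pod, assuming the taint key/value shown in the log; the pod name, image, and tolerationSeconds value are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: taint-eviction-demo   # illustrative name
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.5
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 10   # pod is evicted roughly 10s after the taint is applied
```

The taint itself would be applied with something like `kubectl taint nodes <node> kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute`; pods with different `tolerationSeconds` are evicted at different times, which is what the staggered eviction timestamps in the log reflect.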
•{"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":346,"completed":136,"skipped":2412,"failed":0}
------------------------------
[sig-cli] Kubectl client
  Proxy server
  should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:50:53.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Starting the proxy
Jan 24 18:50:53.951: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5869 proxy --unix-socket=/tmp/kubectl-proxy-unix219213456/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:50:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5869" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":346,"completed":137,"skipped":2434,"failed":0}
------------------------------
[sig-apps] DisruptionController
  should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:50:54.225: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Jan 24 18:50:55.160: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:50:58.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2224" for this suite.
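The DisruptionController test above updates and patches a PodDisruptionBudget's status subresource. A minimal PDB that the disruption controller would process looks like this; the object name and label selector are illustrative:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: pdb-demo          # illustrative name
spec:
  minAvailable: 1         # at least one matching pod must stay up during voluntary disruptions
  selector:
    matchLabels:
      app: demo           # illustrative label
```

"Waiting for the pdb to be processed" in the log corresponds to the controller populating `status` fields such as `currentHealthy`, `desiredHealthy`, and `disruptionsAllowed`; the test then writes to that status via the `/status` subresource with both update and patch verbs.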
•{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":346,"completed":138,"skipped":2441,"failed":0}
------------------------------
[sig-network] DNS
  should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:50:58.311: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 18:51:21.559: INFO: DNS probes using dns-test-ec27042c-eb64-4b7c-b620-c1b024932e0e succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 18:51:24.406: INFO: File wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local from pod dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 24 18:51:24.509: INFO: File jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local from pod dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 24 18:51:24.509: INFO: Lookups using dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 failed for: [wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local]
Jan 24 18:51:29.613: INFO: File wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local from pod dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 24 18:51:29.716: INFO: File jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local from pod dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 24 18:51:29.716: INFO: Lookups using dns-2800/dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 failed for: [wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local]
Jan 24 18:51:34.718: INFO: DNS probes using dns-test-11514c3b-a8a3-45cf-b3e7-919cef8edc66 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2800.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2800.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 24 18:51:37.777: INFO: DNS probes using dns-test-0ad8286b-abfc-44b4-8836-fd5937222b2f succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:51:38.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2800" for this suite.
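The DNS test above drives the behavior of an ExternalName Service: the in-cluster name resolves as a CNAME to whatever `spec.externalName` currently holds, and the transient "contains 'foo.example.com.' instead of 'bar.example.com.'" lines are just DNS caches catching up after the field is changed. A sketch of the Service under test, using the names visible in the log (the initial externalName value is inferred from the probe output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-2800
spec:
  type: ExternalName
  externalName: foo.example.com   # dig CNAME on the service name returns this
```

Updating `externalName` to `bar.example.com` changes the CNAME answer, and switching `spec.type` to `ClusterIP` (as in the final phase of the test) makes the same name resolve to an A record instead.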
•{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":346,"completed":139,"skipped":2455,"failed":0}
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:51:38.219: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 18:51:38.738: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:51:41.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7200" for this suite.
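The CRD defaulting test above exercises `default` values declared in a structural OpenAPI v3 schema, which the apiserver applies both to incoming requests and to objects read back from storage. A minimal sketch of such a CRD; the group, kind, and field names are illustrative, not the ones the e2e test generates:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: demos.example.com        # illustrative group/plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: demos
    singular: demo
    kind: Demo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1       # filled in on create/update AND when decoding from etcd
```

Creating a `Demo` without `spec.replicas` and reading it back would return `replicas: 1`, which is the "for requests and from storage" behavior the test name refers to.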
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":346,"completed":140,"skipped":2477,"failed":0}
------------------------------
[sig-apps] CronJob
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:51:42.144: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:56:43.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7100" for this suite.
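The "suspended cronjob" in the test above is simply a CronJob created with `spec.suspend: true`, which tells the controller to skip creating Jobs at each schedule tick (hence the ~5-minute wait observing that no Jobs appear). A minimal sketch; the name, schedule, and container are illustrative:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: suspended-demo          # illustrative name
spec:
  schedule: "*/1 * * * *"       # would fire every minute if not suspended
  suspend: true                 # controller creates no Jobs while this is true
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: hello
            image: busybox
            command: ["sh", "-c", "date"]
```

Flipping `suspend` back to `false` resumes scheduling from that point on; missed runs during suspension are not retroactively created.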
• [SLOW TEST:301.328 seconds]
[sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":346,"completed":141,"skipped":2480,"failed":0}
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:56:43.472: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-3fb2eda8-d379-4bf8-98be-a6431f4791b2
STEP: Creating a pod to test consume configMaps
Jan 24 18:56:44.207: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839" in namespace "projected-2689" to be "Succeeded or Failed"
Jan 24 18:56:44.309: INFO: Pod "pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839": Phase="Pending", Reason="", readiness=false. Elapsed: 102.038834ms
Jan 24 18:56:46.412: INFO: Pod "pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.205563963s
STEP: Saw pod success
Jan 24 18:56:46.412: INFO: Pod "pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839" satisfied condition "Succeeded or Failed"
Jan 24 18:56:46.515: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839 container agnhost-container: <nil>
STEP: delete the pod
Jan 24 18:56:46.744: INFO: Waiting for pod pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839 to disappear
Jan 24 18:56:46.846: INFO: Pod pod-projected-configmaps-e027ef68-e785-4377-9a0a-5f963117c839 no longer exists
[AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:56:46.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2689" for this suite.
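"With mappings and Item mode set" in the projected-configMap test above refers to the `items` key-to-path remapping plus a per-item file `mode` on the projected volume source. A minimal sketch, assuming an illustrative ConfigMap name, key, and pod (the e2e test uses generated names and the agnhost image instead):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/path/to/my-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-configmap       # illustrative ConfigMap
          items:
          - key: my-key            # the "mapping": key renamed to a nested path
            path: path/to/my-key
            mode: 0400             # the "Item mode": per-file permission bits
```

The test then asserts on the container's output (file content and mode), which is why the log waits for the pod to reach "Succeeded" and fetches its logs.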
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":142,"skipped":2492,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:56:47.060: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:56:47.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902" in namespace "projected-9530" to be "Succeeded or Failed"
Jan 24 18:56:47.785: INFO: Pod "downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902": Phase="Pending", Reason="", readiness=false. Elapsed: 102.551376ms
Jan 24 18:56:49.892: INFO: Pod "downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209618965s
STEP: Saw pod success
Jan 24 18:56:49.892: INFO: Pod "downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902" satisfied condition "Succeeded or Failed"
Jan 24 18:56:49.995: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902 container client-container: <nil>
STEP: delete the pod
Jan 24 18:56:50.225: INFO: Waiting for pod downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902 to disappear
Jan 24 18:56:50.335: INFO: Pod downwardapi-volume-2cd87ba9-33b1-4063-9f5e-618f9166a902 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:56:50.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9530" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":346,"completed":143,"skipped":2505,"failed":0}
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:56:50.551: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Jan 24 18:56:51.176: INFO: Waiting up to 5m0s for pod "downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983" in namespace "projected-7547" to be "Succeeded or Failed"
Jan 24 18:56:51.279: INFO: Pod "downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983": Phase="Pending", Reason="", readiness=false. Elapsed: 103.185546ms
Jan 24 18:56:53.387: INFO: Pod "downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.211194673s
STEP: Saw pod success
Jan 24 18:56:53.387: INFO: Pod "downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983" satisfied condition "Succeeded or Failed"
Jan 24 18:56:53.490: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983 container client-container: <nil>
STEP: delete the pod
Jan 24 18:56:53.724: INFO: Waiting for pod downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983 to disappear
Jan 24 18:56:53.826: INFO: Pod downwardapi-volume-824ad791-8558-46df-b5ee-b39090513983 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:56:53.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7547" for this suite.
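The two projected-downwardAPI tests above (cpu limit, memory request) expose container resource fields as files via `resourceFieldRef` in a downwardAPI volume source. A combined sketch; pod name, image, mount paths, and the concrete resource values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: "32Mi"
      limits:
        cpu: "500m"
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m              # 500m limit is written to the file as "500"
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```

The e2e tests wrap the same idea in a `projected` volume with a `downwardAPI` source; the assertion is simply that the file contents match the declared requests/limits.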
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":346,"completed":144,"skipped":2521,"failed":0}
------------------------------
[sig-node] Downward API
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:56:54.040: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Jan 24 18:56:54.663: INFO: Waiting up to 5m0s for pod "downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04" in namespace "downward-api-8961" to be "Succeeded or Failed"
Jan 24 18:56:54.766: INFO: Pod "downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04": Phase="Pending", Reason="", readiness=false. Elapsed: 102.212434ms
Jan 24 18:56:56.870: INFO: Pod "downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.206474231s
STEP: Saw pod success
Jan 24 18:56:56.870: INFO: Pod "downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04" satisfied condition "Succeeded or Failed"
Jan 24 18:56:56.973: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04 container dapi-container: <nil>
STEP: delete the pod
Jan 24 18:56:57.189: INFO: Waiting for pod downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04 to disappear
Jan 24 18:56:57.291: INFO: Pod downward-api-a42fceae-8acb-42bb-a899-b871b49f2f04 no longer exists
[AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:56:57.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8961" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":346,"completed":145,"skipped":2532,"failed":0}
------------------------------
[sig-apps] StatefulSet
  Basic StatefulSet functionality [StatefulSetBasic]
  Should recreate evicted statefulset [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:56:57.504: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace statefulset-6025
[It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-6025
STEP: Waiting until pod test-pod will start running in namespace statefulset-6025
STEP: Creating statefulset with conflicting port in namespace statefulset-6025
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6025
Jan 24 18:57:00.867: INFO: Observed stateful pod in namespace: statefulset-6025, name: ss-0, uid: 542e52ab-1345-488c-b991-0b10981d88a0, status phase: Pending. Waiting for statefulset controller to delete.
Jan 24 18:57:00.867: INFO: Observed stateful pod in namespace: statefulset-6025, name: ss-0, uid: 542e52ab-1345-488c-b991-0b10981d88a0, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 18:57:00.868: INFO: Observed stateful pod in namespace: statefulset-6025, name: ss-0, uid: 542e52ab-1345-488c-b991-0b10981d88a0, status phase: Failed. Waiting for statefulset controller to delete.
Jan 24 18:57:00.868: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6025
STEP: Removing pod with conflicting port in namespace statefulset-6025
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6025 and will be in running state
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jan 24 18:57:05.296: INFO: Deleting all statefulset in ns statefulset-6025
Jan 24 18:57:05.399: INFO: Scaling statefulset ss to 0
Jan 24 18:57:15.815: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 18:57:15.918: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:16.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6025" for this suite.
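The StatefulSet test above forces `ss-0` into a Failed phase via a port conflict and then watches the controller delete and recreate it. A sketch of the kind of objects involved, assuming illustrative names, image, and port (the governing headless Service is named `test` in the log; the conflicting hostPort value here is made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  clusterIP: None          # headless service governing the StatefulSet's pod DNS
  selector:
    app: ss-demo           # illustrative label
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test
  replicas: 1
  selector:
    matchLabels:
      app: ss-demo
  template:
    metadata:
      labels:
        app: ss-demo
    spec:
      containers:
      - name: web
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
        ports:
        - containerPort: 80
          hostPort: 12345  # illustrative; a hostPort already held by another pod on the node makes ss-0 fail
```

Because StatefulSets guarantee stable identity, the controller keeps deleting the Failed `ss-0` and creating a new pod with the same name until the conflicting pod is removed, which is exactly the Pending/Failed/delete/recreate sequence in the log.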
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":346,"completed":146,"skipped":2534,"failed":0}
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:16.449: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 24 18:57:18.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183438, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183438, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183438, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183437, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78988fc6cd\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 18:57:21.558: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:22.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4714" for this suite.
STEP: Destroying namespace "webhook-4714-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":346,"completed":147,"skipped":2565,"failed":0}
------------------------------
[sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:23.485: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:24.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3016" for this suite.
•{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":346,"completed":148,"skipped":2565,"failed":0}
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:24.426: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 24 18:57:25.059: INFO: Waiting up to 5m0s for pod "pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c" in namespace "emptydir-9011" to be "Succeeded or Failed"
Jan 24 18:57:25.161: INFO: Pod "pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c": Phase="Pending", Reason="", readiness=false. Elapsed: 102.682425ms
Jan 24 18:57:27.266: INFO: Pod "pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.207123202s
STEP: Saw pod success
Jan 24 18:57:27.266: INFO: Pod "pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c" satisfied condition "Succeeded or Failed"
Jan 24 18:57:27.369: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-xf5qq pod pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c container test-container: <nil>
STEP: delete the pod
Jan 24 18:57:27.590: INFO: Waiting for pod pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c to disappear
Jan 24 18:57:27.692: INFO: Pod pod-2c83032f-6277-4fad-bee1-d50a1ce8c12c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:27.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9011" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":346,"completed":149,"skipped":2578,"failed":0}
------------------------------
[sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:27.905: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting the auto-created API token
Jan 24 18:57:29.444: INFO: created pod pod-service-account-defaultsa
Jan 24 18:57:29.444: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 24 18:57:29.548: INFO: created pod pod-service-account-mountsa
Jan 24 18:57:29.548: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 24 18:57:29.652: INFO: created pod pod-service-account-nomountsa
Jan 24 18:57:29.652: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 24 18:57:29.759: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 24 18:57:29.759: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 24 18:57:29.865: INFO: created pod pod-service-account-mountsa-mountspec
Jan 24 18:57:29.865: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 24 18:57:29.969: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 24 18:57:29.970: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 24 18:57:30.075: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 24 18:57:30.075: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 24 18:57:30.183: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 24 18:57:30.183: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 24 18:57:30.288: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 24 18:57:30.289: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:30.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4651" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":346,"completed":150,"skipped":2582,"failed":0}
------------------------------
[sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:30.504: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create deployment with httpd image
Jan 24 18:57:31.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5852 create -f -'
Jan 24 18:57:32.218: INFO: stderr: ""
Jan 24 18:57:32.218: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Jan 24 18:57:32.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5852 diff -f -'
Jan 24 18:57:32.968: INFO: rc: 1
Jan 24 18:57:32.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5852 delete -f -'
Jan 24 18:57:33.534: INFO: stderr: ""
Jan 24 18:57:33.534: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:33.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5852" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":346,"completed":151,"skipped":2601,"failed":0}
------------------------------
[sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:33.755: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name secret-emptykey-test-6cf291a5-bc56-40af-a3ee-65e309132e41
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:34.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5118" for this suite.
•{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":346,"completed":152,"skipped":2664,"failed":0}
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:34.591: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-6adffc26-b0be-4141-bced-e622078eaf9c
STEP: Creating a pod to test consume secrets
Jan 24 18:57:35.736: INFO: Waiting up to 5m0s for pod "pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3" in namespace "secrets-6999" to be "Succeeded or Failed"
Jan 24 18:57:35.841: INFO: Pod "pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3": Phase="Pending", Reason="", readiness=false. Elapsed: 105.321401ms
Jan 24 18:57:37.945: INFO: Pod "pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.209152709s
STEP: Saw pod success
Jan 24 18:57:37.945: INFO: Pod "pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3" satisfied condition "Succeeded or Failed"
Jan 24 18:57:38.051: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3 container secret-volume-test: <nil>
STEP: delete the pod
Jan 24 18:57:38.276: INFO: Waiting for pod pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3 to disappear
Jan 24 18:57:38.380: INFO: Pod pod-secrets-e6fb28fb-bda1-41fa-88fe-bc6572b0add3 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 18:57:38.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6999" for this suite.
STEP: Destroying namespace "secret-namespace-7927" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":2683,"failed":0}
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:57:38.706: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance]
[Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-4febf4fb-4f92-45a1-8c2e-cf4c436856ad
STEP: Creating a pod to test consume configMaps
Jan 24 18:57:39.437: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70" in namespace "projected-3444" to be "Succeeded or Failed"
Jan 24 18:57:39.540: INFO: Pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70": Phase="Pending", Reason="", readiness=false. Elapsed: 102.847973ms
Jan 24 18:57:41.645: INFO: Pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.20790979s
[... identical Phase="Pending" polls for the same pod, repeated roughly every 2.1s from 18:57:43 through 18:59:39 ...]
Jan 24 18:59:41.769: INFO: Pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70": Phase="Pending", Reason="", readiness=false. Elapsed: 2m2.332278426s
Jan 24 18:59:43.873: INFO: Pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70": Phase="Failed", Reason="", readiness=false. Elapsed: 2m4.436693646s
Jan 24 18:59:44.095: INFO: Output of node "capz-conf-ewh6sx-md-0-tb56s" pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70" container "agnhost-container":
STEP: delete the pod
Jan 24 18:59:44.209: INFO: Waiting for pod pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70 to disappear
Jan 24 18:59:44.311: INFO: Pod pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70 no longer exists
Jan 24 18:59:44.312: FAIL: Unexpected error:
    <*errors.errorString | 0xc004bf84c0>
expected pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70" success: pod "pod-projected-configmaps-6bbe4ea8-6e40-425d-9bda-b740f8c1ea70" failed with status: {Phase:Failed Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-24 18:57:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-24 18:57:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-24 18:57:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [agnhost-container]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-24 18:57:39 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.1.0.4 PodIP:192.168.237.141 PodIPs:[{IP:192.168.237.141}] StartTime:2023-01-24 18:57:39 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:agnhost-container State:{Waiting:&ContainerStateWaiting{Reason:RunContainerError,Message:context deadline exceeded,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:,Message:,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:containerd://bccee7a8104e5617f88f89562686d4546a33f6bce0591e4c89071dbe292d75eb,}} Ready:false RestartCount:0 Image:k8s.gcr.io/e2e-test-images/agnhost:2.32 ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:758db666ac7028534dba72e7e9bb1e57bb81b8196f976f7a5cc351ef8b3529e1 ContainerID:containerd://bccee7a8104e5617f88f89562686d4546a33f6bce0591e4c89071dbe292d75eb Started:0xc003902658}] QOSClass:BestEffort EphemeralContainerStatuses:[]}
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc0000231e0, 0x70963a1, 0x12, 0xc003d3ec00, 0x0, 0xc0031adbe0, 0x2, 0x2, 0x72d7460)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742 +0x1e5
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutputRegexp(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:571 k8s.io/kubernetes/test/e2e/common/storage.doProjectedConfigMapE2EWithMappings(0xc0000231e0, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:564 +0x61e k8s.io/kubernetes/test/e2e/common/storage.glob..func7.6() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_configmap.go:87 +0x37 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000ce6480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000ce6480) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x2b testing.tRunner(0xc000ce6480, 0x72d42e8) /usr/local/go/src/testing/testing.go:1193 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1238 +0x2b3 [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 24 18:59:44.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3444" for this suite. 
• Failure [125.839 seconds]
[sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Jan 24 18:59:44.312: Unexpected error occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:742
------------------------------
{"msg":"FAILED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":346,"completed":153,"skipped":2778,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet
Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 18:59:44.546: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:92
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:107
STEP: Creating service test in namespace statefulset-6773
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Jan 24 18:59:45.374: INFO: Found 1 stateful pods, waiting for 3
Jan 24 18:59:55.481: INFO: Waiting for pod ss2-0 to enter
Running - Ready=true, currently Running - Ready=true
Jan 24 18:59:55.481: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 18:59:55.481: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 24 18:59:55.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6773 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 18:59:57.054: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 18:59:57.054: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 18:59:57.054: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-1
Jan 24 18:59:57.490: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 24 18:59:57.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6773 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 18:59:58.940: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 18:59:58.940: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 18:59:58.940: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
STEP: Rolling back to a previous revision
Jan 24 19:00:09.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6773 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 24 19:00:10.711: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 24 19:00:10.711: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 24 19:00:10.711: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 24 19:00:21.347: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 24 19:00:21.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6773 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 24 19:00:22.794: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 24 19:00:22.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 24 19:00:22.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 24 19:00:33.521: INFO: Waiting for StatefulSet statefulset-6773/ss2 to complete update
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:118
Jan 24 19:00:43.733: INFO: Deleting all statefulset in ns statefulset-6773
Jan 24 19:00:43.837: INFO: Scaling statefulset ss2 to 0
Jan 24 19:00:54.255: INFO: Waiting for statefulset status.replicas updated to 0
Jan 24 19:00:54.358: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:00:54.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6773" for this suite.
•{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":346,"completed":154,"skipped":2807,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSS
------------------------------
[sig-node] Sysctls [LinuxOnly] [NodeConformance]
  should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:00:54.952: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:00:55.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-2271" for this suite.
•{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":346,"completed":155,"skipped":2815,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] CronJob
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:00:55.791: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ForbidConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring no more jobs are scheduled
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:01.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-1367" for this suite.
• [SLOW TEST:305.462 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":346,"completed":156,"skipped":2855,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] EndpointSlice
  should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:01.255: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:02.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-2126" for this suite.
•{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":346,"completed":157,"skipped":2908,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:02.674: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Jan 24 19:06:03.396: INFO: The status of Pod labelsupdate5e8fa57d-7f34-485e-884a-6cb6b3146186 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 19:06:05.501: INFO: The status of Pod labelsupdate5e8fa57d-7f34-485e-884a-6cb6b3146186 is Running (Ready = true)
Jan 24 19:06:06.437: INFO: Successfully updated pod "labelsupdate5e8fa57d-7f34-485e-884a-6cb6b3146186"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:08.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2849" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":346,"completed":158,"skipped":2917,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:08.863: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create a Replicaset
STEP: Verify that the required pods have come up.
Jan 24 19:06:09.691: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Getting /status
Jan 24 19:06:12.003: INFO: Replicaset test-rs has Conditions: []
STEP: updating the Replicaset Status
Jan 24 19:06:12.212: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the ReplicaSet status to be updated
Jan 24 19:06:12.315: INFO: Observed &ReplicaSet event: ADDED
Jan 24 19:06:12.315: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.315: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.315: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.315: INFO: Found replicaset test-rs in namespace replicaset-559 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Jan 24 19:06:12.315: INFO: Replicaset test-rs has an updated status
STEP: patching the Replicaset Status
Jan 24 19:06:12.315: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Jan 24 19:06:12.424: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Reason:"", Message:""}}
STEP: watching for the Replicaset status to be patched
Jan 24 19:06:12.527: INFO: Observed &ReplicaSet event: ADDED
Jan 24 19:06:12.527: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.527: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.528: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.528: INFO: Observed replicaset test-rs in namespace replicaset-559 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Jan 24 19:06:12.529: INFO: Observed &ReplicaSet event: MODIFIED
Jan 24 19:06:12.529: INFO: Found replicaset test-rs in namespace replicaset-559 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC }
Jan 24 19:06:12.529: INFO: Replicaset test-rs has a patched status
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:12.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-559" for this suite.
•{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":346,"completed":159,"skipped":2937,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
SSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:12.744: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-2a99fac1-e6d2-4e9d-bd58-79a48156ad52
STEP: Creating a pod to test consume configMaps
Jan 24 19:06:13.470: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271" in namespace "projected-9923" to be "Succeeded or Failed"
Jan 24 19:06:13.572: INFO: Pod "pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271": Phase="Pending", Reason="", readiness=false. Elapsed: 102.268565ms
Jan 24 19:06:15.677: INFO: Pod "pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207154697s
Jan 24 19:06:17.784: INFO: Pod "pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.313676689s
STEP: Saw pod success
Jan 24 19:06:17.784: INFO: Pod "pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271" satisfied condition "Succeeded or Failed"
Jan 24 19:06:17.887: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271 container agnhost-container: <nil>
STEP: delete the pod
Jan 24 19:06:18.105: INFO: Waiting for pod pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271 to disappear
Jan 24 19:06:18.207: INFO: Pod pod-projected-configmaps-58ecc774-b663-42d7-83a1-fd88c6684271 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:18.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9923" for this suite.
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":346,"completed":160,"skipped":2941,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-apps] DisruptionController
  Listing PodDisruptionBudgets for all namespaces
  should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:18.425: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:18.938: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-7525
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:20.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-9284" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:20.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7525" for this suite.
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":346,"completed":161,"skipped":2952,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-auth] ServiceAccounts
  should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:20.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 19:06:21.548: INFO: Got root ca configmap in namespace "svcaccounts-5464"
Jan 24 19:06:21.654: INFO: Deleted root ca configmap in namespace "svcaccounts-5464"
STEP: waiting for a new root ca configmap created
Jan 24 19:06:22.258: INFO: Recreated root ca configmap in namespace "svcaccounts-5464"
Jan 24 19:06:22.362: INFO: Updated root ca configmap in namespace "svcaccounts-5464"
STEP: waiting for the root ca configmap reconciled
Jan 24 19:06:22.966: INFO: Reconciled root ca configmap in namespace "svcaccounts-5464"
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:22.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5464" for this suite.
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":346,"completed":162,"skipped":2960,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:23.180: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 24 19:06:24.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183984, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183984, loc:(*time.Location)(0xa09cc60)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183984, loc:(*time.Location)(0xa09cc60)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63810183984, loc:(*time.Location)(0xa09cc60)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-697cdbd8f4\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 24 19:06:28.008: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 19:06:28.111: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:31.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-8879" for this suite.
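The webhook deployed above backs a CRD whose conversion is delegated to an in-cluster Service, so listing a mixed set of v1/v2 objects forces round-trips through the webhook. A sketch of the relevant `apiextensions.k8s.io/v1` conversion stanza; the CRD name, group, service path, and port are illustrative, while the namespace and service name come from the log above:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.example.com            # illustrative
spec:
  group: example.com                         # illustrative
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: crd-webhook-8879        # namespace from this run
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert                  # illustrative path
          port: 9443                         # illustrative port
```

With `strategy: Webhook`, the API server sends a ConversionReview to this service whenever an object is read or written in a version other than its stored one.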
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":346,"completed":163,"skipped":2989,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:32.454: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating Pod
STEP: Reading file content from the nginx-container
Jan 24 19:06:35.300: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6871 PodName:pod-sharedvolume-80f29b9a-1dae-4c9b-8971-a7b538cf9e6d ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 24 19:06:35.300: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 24 19:06:36.050: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:36.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6871" for this suite.
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":346,"completed":164,"skipped":2990,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-network] Proxy
  version v1
  A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:36.267: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 19:06:36.783: INFO: Creating pod...
Jan 24 19:06:37.055: INFO: Pod Quantity: 1 Status: Pending
Jan 24 19:06:38.159: INFO: Pod Quantity: 1 Status: Pending
Jan 24 19:06:39.158: INFO: Pod Status: Running
Jan 24 19:06:39.158: INFO: Creating service...
Jan 24 19:06:39.268: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/DELETE
Jan 24 19:06:39.372: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jan 24 19:06:39.372: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/GET
Jan 24 19:06:39.476: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jan 24 19:06:39.476: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/HEAD
Jan 24 19:06:39.579: INFO: http.Client request:HEAD | StatusCode:200
Jan 24 19:06:39.579: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/OPTIONS
Jan 24 19:06:39.682: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jan 24 19:06:39.682: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/PATCH
Jan 24 19:06:39.785: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jan 24 19:06:39.785: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/POST
Jan 24 19:06:39.889: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jan 24 19:06:39.889: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/pods/agnhost/proxy/some/path/with/PUT
Jan 24 19:06:39.993: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Jan 24 19:06:39.993: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/DELETE
Jan 24 19:06:40.097: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jan 24 19:06:40.097: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/GET
Jan 24 19:06:40.201: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jan 24 19:06:40.201: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/HEAD
Jan 24 19:06:40.305: INFO: http.Client request:HEAD | StatusCode:200
Jan 24 19:06:40.305: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/OPTIONS
Jan 24 19:06:40.413: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jan 24 19:06:40.413: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/PATCH
Jan 24 19:06:40.517: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jan 24 19:06:40.517: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/POST
Jan 24 19:06:40.621: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jan 24 19:06:40.621: INFO: Starting http.Client for https://capz-conf-ewh6sx-b2098365.northeurope.cloudapp.azure.com:6443/api/v1/namespaces/proxy-3835/services/test-service/proxy/some/path/with/PUT
Jan 24 19:06:40.725: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3835" for this suite.
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":346,"completed":165,"skipped":2998,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-node] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:40.941: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:188
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 19:06:41.667: INFO: The status of Pod server-envvars-62a9104e-46e8-4db3-a677-2974272a4074 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 19:06:43.775: INFO: The status of Pod server-envvars-62a9104e-46e8-4db3-a677-2974272a4074 is Pending, waiting for it to be Running (with Ready = true)
Jan 24 19:06:45.773: INFO: The status of Pod server-envvars-62a9104e-46e8-4db3-a677-2974272a4074 is Running (Ready = true)
Jan 24 19:06:46.106: INFO: Waiting up to 5m0s for pod "client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4" in namespace "pods-8978" to be "Succeeded or Failed"
Jan 24 19:06:46.209: INFO: Pod "client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4": Phase="Pending", Reason="", readiness=false. Elapsed: 102.634475ms
Jan 24 19:06:48.311: INFO: Pod "client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.204716008s
STEP: Saw pod success
Jan 24 19:06:48.311: INFO: Pod "client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4" satisfied condition "Succeeded or Failed"
Jan 24 19:06:48.420: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4 container env3cont: <nil>
STEP: delete the pod
Jan 24 19:06:48.638: INFO: Waiting for pod client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4 to disappear
Jan 24 19:06:48.746: INFO: Pod client-envvars-67f6fc1a-80e0-43ca-ac6d-866d2b76bed4 no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:48.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8978" for this suite.
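The env-vars spec above relies on the docker-links-style variables the kubelet injects into containers for each active Service: the Service name is upper-cased, dashes become underscores, and `_SERVICE_HOST` / `_SERVICE_PORT` suffixes are appended. A small sketch of that naming rule; the service name and address below are made up, not taken from this run:

```python
def service_env_vars(name: str, cluster_ip: str, port: int) -> dict:
    """Derive the env var names kubelet injects for a Service.

    Rule: upper-case the Service name, replace '-' with '_', then
    append _SERVICE_HOST / _SERVICE_PORT.
    """
    key = name.upper().replace("-", "_")
    return {
        f"{key}_SERVICE_HOST": cluster_ip,
        f"{key}_SERVICE_PORT": str(port),
    }

# Hypothetical service "redis-master" with ClusterIP 10.0.0.11 on port 6379
print(service_env_vars("redis-master", "10.0.0.11", 6379))
```

The client pod in the test only needs to print its environment; the assertion is that the derived names resolve to the server Service's address.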
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":346,"completed":166,"skipped":3020,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-api-machinery] Watchers
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:49.003: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 24 19:06:49.923: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1967 470d6503-1823-474d-b477-f1e46482336f 19027 0 2023-01-24 19:06:49 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-24 19:06:49 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 24 19:06:49.923: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1967 470d6503-1823-474d-b477-f1e46482336f 19028 0 2023-01-24 19:06:49 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-24 19:06:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 24 19:06:50.334: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1967 470d6503-1823-474d-b477-f1e46482336f 19029 0 2023-01-24 19:06:49 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-24 19:06:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 24 19:06:50.334: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-1967 470d6503-1823-474d-b477-f1e46482336f 19030 0 2023-01-24 19:06:49 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-24 19:06:49 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:50.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1967" for this suite.
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":346,"completed":167,"skipped":3031,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-node] Variable Expansion
  should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:50.584: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Jan 24 19:06:53.408: INFO: Deleting pod "var-expansion-fc1cd976-acb3-4eb2-9b93-39d45b6b9296" in namespace "var-expansion-9026"
Jan 24 19:06:53.519: INFO: Wait up to 5m0s for pod "var-expansion-fc1cd976-acb3-4eb2-9b93-39d45b6b9296" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:06:55.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9026" for this suite.
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":346,"completed":168,"skipped":3031,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-node] Variable Expansion
  should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:06:55.975: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in volume subpath
Jan 24 19:06:56.602: INFO: Waiting up to 5m0s for pod "var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee" in namespace "var-expansion-5127" to be "Succeeded or Failed"
Jan 24 19:06:56.710: INFO: Pod "var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee": Phase="Pending", Reason="", readiness=false. Elapsed: 107.803648ms
Jan 24 19:06:58.818: INFO: Pod "var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee": Phase="Running", Reason="", readiness=true. Elapsed: 2.216364276s
Jan 24 19:07:00.928: INFO: Pod "var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.325889645s
STEP: Saw pod success
Jan 24 19:07:00.928: INFO: Pod "var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee" satisfied condition "Succeeded or Failed"
Jan 24 19:07:01.041: INFO: Trying to get logs from node capz-conf-ewh6sx-md-0-tb56s pod var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee container dapi-container: <nil>
STEP: delete the pod
Jan 24 19:07:01.278: INFO: Waiting for pod var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee to disappear
Jan 24 19:07:01.380: INFO: Pod var-expansion-628ed2cb-c560-4e93-8493-febc68db03ee no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 24 19:07:01.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5127" for this suite.
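The two Variable Expansion specs above exercise `subPathExpr`, which expands `$(VAR)` references from the container's own environment into a per-pod volume subpath; an absolute path there is rejected, which is what the [Slow] failure-case spec checks. A minimal sketch, with illustrative pod and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.32
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    volumeMounts:
    - name: workdir
      mountPath: /logs
      subPathExpr: $(POD_NAME)          # "/$(POD_NAME)" (absolute) would be rejected
  volumes:
  - name: workdir
    emptyDir: {}
```

Each pod created from this template writes under a subdirectory named after itself, which is the usual motivation for `subPathExpr` over a plain `subPath`.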
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":346,"completed":169,"skipped":3046,"failed":1,"failures":["[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]"]}
------------------------------
[sig-cli] Kubectl client
  Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 24 19:07:01.627: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl logs
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1396
STEP: creating an pod
Jan 24 19:07:02.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6089 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.32 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s'
Jan 24 19:07:02.585: INFO: stderr: ""
Jan 24 19:07:02.586: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for log generator to start.
Jan 24 19:07:02.586: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Jan 24 19:07:02.586: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6089" to be "running and ready, or succeeded"
Jan 24 19:07:02.688: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 101.884702ms
Jan 24 19:07:04.791: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.205640913s
Jan 24 19:07:04.791: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Jan 24 19:07:04.791: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for a matching strings
Jan 24 19:07:04.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6089 logs logs-generator logs-generator'
Jan 24 19:07:05.318: INFO: stderr: ""
Jan 24 19:07:05.318: INFO: stdout: "I0124 19:07:03.690932       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/rdc 355\nI0124 19:07:03.891123       1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/r6gm 239\nI0124 19:07:04.091821       1 logs_generator.go:76] 2 POST /api/v1/namespaces/ns/pods/5kbq 329\nI0124 19:07:04.291262       1 logs_generator.go:76] 3 GET /api/v1/namespaces/default/pods/6tp 559\nI0124 19:07:04.491628       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/p9b 571\nI0124 19:07:04.691989       1 logs_generator.go:76] 5 POST /api/v1/namespaces/kube-system/pods/c6z 527\nI0124 19:07:04.891293       1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/ccwb 343\nI0124 19:07:05.091640       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/qnvd 374\n"
STEP: limiting log lines
Jan 24 19:07:05.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6089 logs logs-generator logs-genera