Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 59m5s
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
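(For readability: the --ginkgo.focus value above is a regex-escaped spec name. With the \s, \-, and \[ escapes removed, it selects exactly one spec: "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest".)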
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc001e14198>: {
        error: <*errors.withMessage | 0xc001046000>{
            cause: <*errors.errorString | 0xc000246070>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
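The wrapped error above is the standard github.com/pkg/errors chain: a *withStack wrapping a *withMessage wrapping the underlying errorString. A minimal sketch of how such a chain is produced and unwrapped (runConformance here is a hypothetical stand-in for the test helper, not the actual framework code):

package main

import (
	"fmt"

	"github.com/pkg/errors"
)

// runConformance is a hypothetical stand-in for the helper that shells
// out to the kubetest container and wraps its failure.
func runConformance() error {
	// Innermost cause (*errors.errorString), as reported when the
	// conformance container exits non-zero.
	cause := errors.New("error container run failed with exit code 1")
	// errors.Wrap attaches a message (*withMessage) and a stack
	// (*withStack), producing exactly the chain printed above.
	return errors.Wrap(cause, "Unable to run conformance tests")
}

func main() {
	err := runConformance()
	fmt.Println(err)               // Unable to run conformance tests: error container run failed with exit code 1
	fmt.Println(errors.Cause(err)) // error container run failed with exit code 1
	fmt.Printf("%+v\n", err)       // %+v additionally prints the recorded stack trace
}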
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-0z48fg
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-0z48fg"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-dctc5v" using the "upgrades-cgroupfs" template (Kubernetes v1.19.16, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-dctc5v --infrastructure (default) --kubernetes-version v1.19.16 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-dctc5v-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-dctc5v-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-0z48fg/k8s-upgrade-and-conformance-dctc5v-xsm6m to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-0z48fg/k8s-upgrade-and-conformance-dctc5v-xsm6m to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.20.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-0z48fg/k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd to be upgraded to v1.20.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.20.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-0z48fg/k8s-upgrade-and-conformance-dctc5v-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-0z48fg/k8s-upgrade-and-conformance-dctc5v-mp-0 to be upgraded from v1.19.16 to v1.20.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.20.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-slowSpecThreshold=120" "-nodes=4" "/usr/local/bin/e2e.test" "--" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673448006 - Will randomize all specs
Will run 5668 specs
Running in parallel across 4 nodes
Jan 11 14:40:08.654: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:40:08.657: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 11 14:40:08.675: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 11 14:40:08.736: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 11 14:40:08.736: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 11 14:40:08.736: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 11 14:40:08.736: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 11 14:40:08.736: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 11 14:40:08.736: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 11 14:40:08.736: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
Jan 11 14:40:08.736: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:08.736: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:08.736: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:08.736: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:08.736: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:08.736: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:08.736: INFO: Jan 11 14:40:10.755: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:10.755: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:10.755: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:10.755: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:10.755: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:10.755: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Jan 11 14:40:10.755: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:10.755: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:10.755: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:10.755: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:10.755: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:10.756: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:10.756: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:10.756: INFO: Jan 11 14:40:12.761: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:12.761: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:12.761: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:12.761: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:12.761: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:12.761: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Jan 11 14:40:12.761: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:12.761: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:12.761: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:12.761: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:12.761: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:12.761: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:12.761: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:12.761: INFO: Jan 11 14:40:14.757: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:14.757: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:14.757: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:14.757: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:14.757: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:14.757: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Jan 11 14:40:14.757: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:14.757: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:14.757: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:14.758: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:14.758: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:14.758: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:14.758: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:14.758: INFO: Jan 11 14:40:16.754: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:16.754: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:16.754: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:16.754: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:16.754: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:16.754: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Jan 11 14:40:16.754: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:16.754: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:16.754: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:16.754: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:16.754: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:16.754: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:16.754: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:16.754: INFO: Jan 11 14:40:18.753: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:18.753: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:18.753: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:18.753: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:18.753: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:18.753: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Jan 11 14:40:18.753: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:18.753: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:18.753: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:18.753: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:18.753: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:18.753: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:18.753: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:18.753: INFO: Jan 11 14:40:20.753: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:20.753: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:20.753: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:20.753: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:20.753: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:20.753: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Jan 11 14:40:20.753: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:20.753: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:20.753: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:20.753: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:20.753: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:20.753: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:20.753: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:20.753: INFO: Jan 11 14:40:22.754: INFO: The status of Pod coredns-f9fd979d6-m6wkx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:22.754: INFO: The status of Pod kindnet-5m5rf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:22.754: INFO: The status of Pod kindnet-mgzx7 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:22.754: INFO: The status of Pod kube-proxy-pg4qw is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:22.754: INFO: The status of Pod kube-proxy-xhr4v is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:22.754: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) Jan 11 14:40:22.754: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 11 14:40:22.754: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:22.754: INFO: coredns-f9fd979d6-m6wkx k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:36:40 +0000 UTC }] Jan 11 14:40:22.754: INFO: kindnet-5m5rf k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:32:02 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:50 +0000 UTC }] Jan 11 14:40:22.754: INFO: kindnet-mgzx7 k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:31:34 +0000 UTC }] Jan 11 14:40:22.754: INFO: kube-proxy-pg4qw k8s-upgrade-and-conformance-dctc5v-worker-f7a1my Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:38:07 +0000 UTC }] Jan 11 14:40:22.754: INFO: kube-proxy-xhr4v k8s-upgrade-and-conformance-dctc5v-worker-6m63et Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:39:37 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:37:11 +0000 UTC }] Jan 11 14:40:22.754: INFO: Jan 11 14:40:24.753: INFO: The status of Pod coredns-f9fd979d6-jjxxv is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 11 14:40:24.753: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) Jan 11 14:40:24.753: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 11 14:40:24.753: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:40:24.753: INFO: coredns-f9fd979d6-jjxxv k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:40:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:40:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:40:23 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:40:23 +0000 UTC }] Jan 11 14:40:24.753: INFO: Jan 11 14:40:26.753: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) Jan 11 14:40:26.753: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
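The polling above is the e2e framework re-listing kube-system on a 2-second cadence until every pod reports Ready. A minimal client-go sketch of that loop (an illustration, not the framework's actual implementation; the /tmp/kubeconfig path mirrors the --kubeconfig flag above):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path, matching the flag in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Re-list every 2 seconds, like the cadence visible in the log.
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		ready := 0
		for i := range pods.Items {
			if podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
		if ready == len(pods.Items) {
			return
		}
		time.Sleep(2 * time.Second)
	}
}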
Jan 11 14:40:26.753: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 11 14:40:26.761: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 11 14:40:26.761: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 11 14:40:26.761: INFO: e2e test version: v1.20.15
Jan 11 14:40:26.762: INFO: kube-apiserver version: v1.20.15
Jan 11 14:40:26.763: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:40:26.767: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
Jan 11 14:40:26.790: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:40:26.814: INFO: Cluster IP family: ipv4
SSS
------------------------------
Jan 11 14:40:26.799: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:40:26.822: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
Jan 11 14:40:26.799: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:40:26.835: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:26.830: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
Jan 11 14:40:26.862: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:26.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1738" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•S
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":1,"skipped":3,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:26.888: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
Jan 11 14:40:26.936: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 11 14:40:27.008: INFO: Waiting up to 5m0s for pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907" in namespace "downward-api-9440" to be "Succeeded or Failed"
Jan 11 14:40:27.031: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907": Phase="Pending", Reason="", readiness=false. Elapsed: 22.622671ms
Jan 11 14:40:29.035: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026837016s
Jan 11 14:40:31.040: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031740962s
Jan 11 14:40:33.104: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907": Phase="Running", Reason="", readiness=true. Elapsed: 6.096004835s
Jan 11 14:40:35.108: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100042373s
STEP: Saw pod success
Jan 11 14:40:35.108: INFO: Pod "downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907" satisfied condition "Succeeded or Failed"
Jan 11 14:40:35.112: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907 container dapi-container: <nil>
STEP: delete the pod
Jan 11 14:40:35.139: INFO: Waiting for pod downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907 to disappear
Jan 11 14:40:35.142: INFO: Pod downward-api-96feabd4-edb2-449f-bf59-b08c4d72d907 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:35.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9440" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":35,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:26.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
Jan 11 14:40:27.078: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:40:27.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 11 14:40:29.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:40:31.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044827, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:40:34.662: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:35.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6672" for this suite.
STEP: Destroying namespace "webhook-6672-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":1,"skipped":56,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:35.971: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should delete a collection of pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of pods
Jan 11 14:40:36.010: INFO: created test-pod-1
Jan 11 14:40:36.014: INFO: created test-pod-2
Jan 11 14:40:36.019: INFO: created test-pod-3
STEP: waiting for all 3 pods to be located
STEP: waiting for all pods to be deleted
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:36.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4327" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":2,"skipped":83,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:40:36.096: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating secret with name projected-secret-test-e01163a3-0c0f-47c9-a89d-264eacccabb2 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 11 14:40:36.148: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2" in namespace "projected-9752" to be "Succeeded or Failed" Jan 11 14:40:36.152: INFO: Pod "pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.68613ms Jan 11 14:40:38.156: INFO: Pod "pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00737175s �[1mSTEP�[0m: Saw pod success Jan 11 14:40:38.156: INFO: Pod "pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2" satisfied condition "Succeeded or Failed" Jan 11 14:40:38.159: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 11 14:40:38.175: INFO: Waiting for pod pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2 to disappear Jan 11 14:40:38.177: INFO: Pod pod-projected-secrets-e420b7e1-5204-46b1-b691-ae8f8736c8b2 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:40:38.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9752" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":97,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:40:26.905: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 �[1mSTEP�[0m: creating an pod Jan 11 14:40:27.015: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 11 14:40:27.401: INFO: stderr: "" Jan 11 14:40:27.401: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Waiting for log generator to start. Jan 11 14:40:27.401: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 11 14:40:27.401: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4960" to be "running and ready, or succeeded" Jan 11 14:40:27.410: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 8.319622ms Jan 11 14:40:29.414: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012691723s Jan 11 14:40:31.424: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02207772s Jan 11 14:40:33.562: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.160589459s Jan 11 14:40:33.563: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 11 14:40:33.564: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] �[1mSTEP�[0m: checking for a matching strings Jan 11 14:40:33.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator' Jan 11 14:40:33.707: INFO: stderr: "" Jan 11 14:40:33.707: INFO: stdout: "I0111 14:40:31.250660 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/88v 504\nI0111 14:40:31.450827 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/57s 542\nI0111 14:40:31.650806 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/ww5 464\nI0111 14:40:31.850831 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mjd 406\nI0111 14:40:32.050773 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/q5z 420\nI0111 14:40:32.250791 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/72k 481\nI0111 14:40:32.450820 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/qlm 340\nI0111 14:40:32.650798 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/zzr 380\nI0111 14:40:32.850836 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/b77 550\nI0111 14:40:33.050799 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/2xg7 250\nI0111 14:40:33.250825 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/swq 359\nI0111 14:40:33.450671 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/plts 544\nI0111 14:40:33.650857 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/gbkg 400\n" �[1mSTEP�[0m: limiting log lines Jan 11 14:40:33.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator --tail=1' Jan 11 14:40:33.831: INFO: stderr: "" Jan 11 14:40:33.831: INFO: stdout: "I0111 14:40:33.650857 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/gbkg 400\n" Jan 11 14:40:33.831: INFO: got output "I0111 14:40:33.650857 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/gbkg 400\n" �[1mSTEP�[0m: limiting log bytes Jan 11 14:40:33.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator --limit-bytes=1' Jan 11 14:40:33.929: INFO: stderr: "" Jan 11 14:40:33.929: INFO: stdout: "I" Jan 11 14:40:33.929: INFO: got output "I" �[1mSTEP�[0m: exposing timestamps Jan 11 14:40:33.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator --tail=1 --timestamps' Jan 11 14:40:34.032: INFO: stderr: "" Jan 11 14:40:34.032: INFO: stdout: "2023-01-11T14:40:33.851104305Z I0111 14:40:33.850853 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/hqr2 415\n" Jan 11 14:40:34.032: INFO: got output "2023-01-11T14:40:33.851104305Z I0111 14:40:33.850853 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/hqr2 415\n" �[1mSTEP�[0m: restricting to a time range Jan 11 14:40:36.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator --since=1s' Jan 11 14:40:36.669: INFO: stderr: "" Jan 11 14:40:36.669: INFO: stdout: "I0111 14:40:35.850812 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/kw4s 477\nI0111 14:40:36.051763 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/4c8 305\nI0111 14:40:36.250757 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/qsl 280\nI0111 14:40:36.450765 1 logs_generator.go:76] 26 GET 
/api/v1/namespaces/default/pods/8dx 328\nI0111 14:40:36.650822 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/mzx 434\n" Jan 11 14:40:36.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 logs logs-generator logs-generator --since=24h' Jan 11 14:40:36.784: INFO: stderr: "" Jan 11 14:40:36.784: INFO: stdout: "I0111 14:40:31.250660 1 logs_generator.go:76] 0 POST /api/v1/namespaces/default/pods/88v 504\nI0111 14:40:31.450827 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/57s 542\nI0111 14:40:31.650806 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/ww5 464\nI0111 14:40:31.850831 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/mjd 406\nI0111 14:40:32.050773 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/q5z 420\nI0111 14:40:32.250791 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/72k 481\nI0111 14:40:32.450820 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/qlm 340\nI0111 14:40:32.650798 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/zzr 380\nI0111 14:40:32.850836 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/b77 550\nI0111 14:40:33.050799 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/2xg7 250\nI0111 14:40:33.250825 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/swq 359\nI0111 14:40:33.450671 1 logs_generator.go:76] 11 POST /api/v1/namespaces/default/pods/plts 544\nI0111 14:40:33.650857 1 logs_generator.go:76] 12 GET /api/v1/namespaces/kube-system/pods/gbkg 400\nI0111 14:40:33.850853 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/hqr2 415\nI0111 14:40:34.050790 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/nph 490\nI0111 14:40:34.250811 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/rmlh 497\nI0111 14:40:34.450820 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/n5k4 592\nI0111 14:40:34.650849 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/default/pods/z9p 569\nI0111 14:40:34.850806 1 logs_generator.go:76] 18 POST /api/v1/namespaces/kube-system/pods/k44 512\nI0111 14:40:35.050792 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/nvnd 320\nI0111 14:40:35.250796 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/x8k 381\nI0111 14:40:35.450799 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/sn6 209\nI0111 14:40:35.650808 1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/9mmb 547\nI0111 14:40:35.850812 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/ns/pods/kw4s 477\nI0111 14:40:36.051763 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/default/pods/4c8 305\nI0111 14:40:36.250757 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/kube-system/pods/qsl 280\nI0111 14:40:36.450765 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/8dx 328\nI0111 14:40:36.650822 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/default/pods/mzx 434\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 11 14:40:36.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4960 delete pod logs-generator' Jan 11 14:40:38.439: INFO: stderr: "" Jan 11 14:40:38.439: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:38.198: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:40:38.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538" in namespace "projected-8160" to be "Succeeded or Failed"
Jan 11 14:40:38.235: INFO: Pod "downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538": Phase="Pending", Reason="", readiness=false. Elapsed: 4.605156ms
Jan 11 14:40:40.240: INFO: Pod "downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008880867s
STEP: Saw pod success
Jan 11 14:40:40.240: INFO: Pod "downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538" satisfied condition "Succeeded or Failed"
Jan 11 14:40:40.242: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538 container client-container: <nil>
STEP: delete the pod
Jan 11 14:40:40.268: INFO: Waiting for pod downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538 to disappear
Jan 11 14:40:40.271: INFO: Pod downwardapi-volume-448307e9-16a1-47eb-b93e-3e5ec8b85538 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:40.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8160" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":107,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
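The "set mode on item file" spec above verifies that a per-item file mode is honored for a projected downwardAPI volume. A minimal sketch of an equivalent pod (the name, image, command, and 0400 mode are illustrative assumptions, not taken from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "ls -l /etc/podinfo"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
                mode: 0400           # the per-item mode the spec asserts on
    EOF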
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:40.305: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: starting the proxy server
Jan 11 14:40:40.336: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8910 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:40.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8910" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":5,"skipped":127,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:40.431: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:42.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1537" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":6,"skipped":131,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:42.542: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-be92dd29-0a0a-42e5-871c-edfad2437410
STEP: Creating a pod to test consume secrets
Jan 11 14:40:42.589: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8" in namespace "projected-7408" to be "Succeeded or Failed"
Jan 11 14:40:42.594: INFO: Pod "pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.168076ms
Jan 11 14:40:44.597: INFO: Pod "pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00742637s
STEP: Saw pod success
Jan 11 14:40:44.597: INFO: Pod "pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8" satisfied condition "Succeeded or Failed"
Jan 11 14:40:44.599: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:40:44.616: INFO: Waiting for pod pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8 to disappear
Jan 11 14:40:44.618: INFO: Pod pod-projected-secrets-171b9be9-0013-409e-b066-e5a3af1d7cf8 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:44.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7408" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":152,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:44.648: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 11 14:40:44.678: INFO: Waiting up to 5m0s for pod "pod-987405c1-f88d-43c4-bf17-1f0cc12f5933" in namespace "emptydir-284" to be "Succeeded or Failed"
Jan 11 14:40:44.681: INFO: Pod "pod-987405c1-f88d-43c4-bf17-1f0cc12f5933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.225351ms
Jan 11 14:40:46.685: INFO: Pod "pod-987405c1-f88d-43c4-bf17-1f0cc12f5933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006304001s
STEP: Saw pod success
Jan 11 14:40:46.685: INFO: Pod "pod-987405c1-f88d-43c4-bf17-1f0cc12f5933" satisfied condition "Succeeded or Failed"
Jan 11 14:40:46.687: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod pod-987405c1-f88d-43c4-bf17-1f0cc12f5933 container test-container: <nil>
STEP: delete the pod
Jan 11 14:40:46.702: INFO: Waiting for pod pod-987405c1-f88d-43c4-bf17-1f0cc12f5933 to disappear
Jan 11 14:40:46.704: INFO: Pod pod-987405c1-f88d-43c4-bf17-1f0cc12f5933 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:40:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-284" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":167,"failed":0}
SSSSSSSSSSSSS
------------------------------
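The (non-root,0666,tmpfs) spec above writes a world-readable file into a memory-backed emptyDir as a non-root user. A minimal sketch of the same setup (pod name, image, command, and uid are illustrative assumptions):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        # Create a 0666 file on the tmpfs mount as a non-root user.
        command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
        securityContext:
          runAsUser: 1001            # non-root, mirroring the (non-root,...) variant
        volumeMounts:
        - name: vol
          mountPath: /mnt/volume
      volumes:
      - name: vol
        emptyDir:
          medium: Memory             # tmpfs-backed emptyDir
    EOF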
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:26.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
Jan 11 14:40:26.862: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-9292
STEP: creating service affinity-clusterip-transition in namespace services-9292
STEP: creating replication controller affinity-clusterip-transition in namespace services-9292
I0111 14:40:26.886839 18 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-9292, replica count: 3
I0111 14:40:29.937277 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0111 14:40:32.937548 18 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:40:32.943: INFO: Creating new exec pod
Jan 11 14:40:35.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9292 exec execpod-affinitynlwzc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80'
Jan 11 14:40:36.183: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 11 14:40:36.184: INFO: stdout: ""
Jan 11 14:40:36.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9292 exec execpod-affinitynlwzc -- /bin/sh -x -c nc -zv -t -w 2 10.129.214.8 80'
Jan 11 14:40:36.372: INFO: stderr: "+ nc -zv -t -w 2 10.129.214.8 80\nConnection to 10.129.214.8 80 port [tcp/http] succeeded!\n"
Jan 11 14:40:36.372: INFO: stdout: ""
Jan 11 14:40:36.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9292 exec execpod-affinitynlwzc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.129.214.8:80/ ; done'
Jan 11 14:40:36.698: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n"
Jan 11 14:40:36.698: INFO: stdout:
"\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x"
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:40:36.698: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9292 exec execpod-affinitynlwzc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.129.214.8:80/ ; done'
Jan 11 14:41:06.942: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n"
Jan 11 14:41:06.942: INFO: stdout:
"\naffinity-clusterip-transition-nqgsf\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-nqgsf\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-nqgsf\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-kj85w\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x"
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-nqgsf
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-nqgsf
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-nqgsf
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kj85w
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.942: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:06.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9292 exec execpod-affinitynlwzc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.129.214.8:80/ ; done'
Jan 11 14:41:07.225: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.129.214.8:80/\n"
Jan 11 14:41:07.225: INFO: stdout:
"\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x\naffinity-clusterip-transition-kbl7x"
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Received response from host: affinity-clusterip-transition-kbl7x
Jan 11 14:41:07.225: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-9292, will wait for the garbage collector to delete the pods
Jan 11 14:41:07.296: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.862869ms
Jan 11 14:41:07.796: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 500.364275ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:20.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9292" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:20.384: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: validating cluster-info
Jan 11 14:41:20.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1409 cluster-info'
Jan 11 14:41:20.657: INFO: stderr: ""
Jan 11 14:41:20.657: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:20.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1409" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:35.166: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jan 11 14:40:35.730: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:40:35.743: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 11 14:40:37.752: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044835, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044835, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044835, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809044835, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:40:40.768: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Jan 11 14:40:50.785: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:41:00.895: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:41:10.999: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:41:21.099: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:41:31.110: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:41:31.110: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002ee1f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForWebhookConfigurations(0xc001068c60, 0xc0022434e0, 0x14, 0xc0009f07d0, 0x20fb, 0xc0022434e0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1360 +0x7ca
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.10()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:274 +0xb2
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003202300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003202300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003202300, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:31.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5727" for this suite.
STEP: Destroying namespace "webhook-5727-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [56.001 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 11 14:41:31.110: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002ee1f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1360
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":1,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
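This is the failure that marks the conformance run as failed: the spec registers a validating webhook and times out waiting for the configuration to become effective. Hedged triage commands for this situation, runnable against the workload cluster while the namespaces still exist (resource names taken from the log above; the pod label selector is an assumption):

    # Was the webhook backend ever ready? The Deployment and its Service endpoints must both exist.
    kubectl --namespace=webhook-5727 get deployment sample-webhook-deployment
    kubectl --namespace=webhook-5727 get endpoints e2e-test-webhook
    kubectl --namespace=webhook-5727 describe pods -l app=sample-webhook   # label is an assumption
    # Did the configuration objects the spec registered actually land?
    kubectl get validatingwebhookconfigurations
    kubectl get mutatingwebhookconfigurations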
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:31.171: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:41:31.748: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:41:34.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:34.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2310" for this suite.
STEP: Destroying namespace "webhook-2310-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:38.491: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 11 14:40:38.523: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2691 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:40:38.523: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2691 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 11 14:40:48.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2881 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:40:48.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2881 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 11 14:40:58.538: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2920 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:40:58.539: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2920 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 11 14:41:08.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2958 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:41:08.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-6970 5152233f-4793-408d-a9f6-1a55228c33c9 2958 0 2023-01-11 14:40:38 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-11 14:40:48 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 11 14:41:18.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6970 898aaef7-760b-4ad0-ab05-ac2ea0cf3044 2986 0 2023-01-11 14:41:18 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-11 14:41:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:41:18.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6970 898aaef7-760b-4ad0-ab05-ac2ea0cf3044 2986 0 2023-01-11 14:41:18 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-11 14:41:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 11 14:41:28.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6970 898aaef7-760b-4ad0-ab05-ac2ea0cf3044 3216 0 2023-01-11 14:41:18 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-11 14:41:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:41:28.566: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-6970 898aaef7-760b-4ad0-ab05-ac2ea0cf3044 3216 0 2023-01-11 14:41:18 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-11 14:41:18 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:38.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6970" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":3,"skipped":41,"failed":0}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:38.594: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CSR API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 11 14:41:39.305: INFO: starting watch
STEP: patching
STEP: updating
Jan 11 14:41:39.315: INFO: waiting for watch events with expected annotations
Jan 11 14:41:39.315: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:39.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-1952" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0}
SSSSSSSSSS
------------------------------
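The CSR spec above walks the certificates.k8s.io/v1 API surface: create, get, list, watch, patch, and update, plus the /approval and /status subresources. A sketch of the same operations from the command line (<csr-name> is a placeholder, not a name from this run):

    # Discovery, as in the "getting /apis/certificates.k8s.io/v1" step.
    kubectl get --raw /apis/certificates.k8s.io/v1
    # List CSRs, then exercise the approval subresource.
    kubectl get csr
    kubectl certificate approve <csr-name>
    kubectl certificate deny <csr-name>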
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:39.386: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override arguments
Jan 11 14:41:39.419: INFO: Waiting up to 5m0s for pod "client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff" in namespace "containers-664" to be "Succeeded or Failed"
Jan 11 14:41:39.422: INFO: Pod "client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.398025ms
Jan 11 14:41:41.426: INFO: Pod "client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006357356s
STEP: Saw pod success
Jan 11 14:41:41.426: INFO: Pod "client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff" satisfied condition "Succeeded or Failed"
Jan 11 14:41:41.429: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:41:41.444: INFO: Waiting for pod client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff to disappear
Jan 11 14:41:41.447: INFO: Pod client-containers-ad170554-ce16-49f5-a70b-4b04a2ee5aff no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:41.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-664" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}
SSSSSSSSSSSS
------------------------------
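The Docker Containers spec above checks that a pod's args replace the image's default CMD without touching its ENTRYPOINT. A minimal sketch (pod name and image are illustrative; busybox declares no ENTRYPOINT, so the args are executed directly as the command):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: args-override-demo       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: agnhost-container
        image: busybox
        # args corresponds to docker CMD; setting it overrides the image default.
        args: ["echo", "overridden", "arguments"]
    EOF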
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":2,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:34.894: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating the pod
Jan 11 14:41:37.455: INFO: Successfully updated pod "annotationupdateb51aeca4-df7a-44fe-a40f-c16ae4853fc7"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:41.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4354" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:41.485: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:41:41.521: INFO: Waiting up to 5m0s for pod "busybox-user-65534-70fe6335-40f9-4215-bb0c-9b46c18af086" in namespace "security-context-test-9950" to be "Succeeded or Failed"
Jan 11 14:41:41.524: INFO: Pod "busybox-user-65534-70fe6335-40f9-4215-bb0c-9b46c18af086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054057ms
Jan 11 14:41:43.527: INFO: Pod "busybox-user-65534-70fe6335-40f9-4215-bb0c-9b46c18af086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005066306s
Jan 11 14:41:43.527: INFO: Pod "busybox-user-65534-70fe6335-40f9-4215-bb0c-9b46c18af086" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:43.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9950" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":47,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:41.473: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-map-fd9905d9-3879-4b6d-906e-9f1d8dd3d7ef
STEP: Creating a pod to test consume secrets
Jan 11 14:41:41.517: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6" in namespace "projected-9447" to be "Succeeded or Failed"
Jan 11 14:41:41.520: INFO: Pod "pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.535008ms
Jan 11 14:41:43.523: INFO: Pod "pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005726213s
STEP: Saw pod success
Jan 11 14:41:43.523: INFO: Pod "pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6" satisfied condition "Succeeded or Failed"
Jan 11 14:41:43.526: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:41:43.540: INFO: Waiting for pod pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6 to disappear
Jan 11 14:41:43.543: INFO: Pod pod-projected-secrets-1dbb76a6-67c3-44fd-8109-236ed13f37c6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:43.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9447" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":75,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:43.573: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should test the lifecycle of an Endpoint [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:43.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4858" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":7,"skipped":86,"failed":0}
SS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:43.546: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Starting the proxy
Jan 11 14:41:43.577: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3909 proxy --unix-socket=/tmp/kubectl-proxy-unix271151380/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:43.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3909" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":5,"skipped":50,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
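The --unix-socket spec above starts kubectl proxy on a local socket instead of a TCP port. A sketch of the same flow by hand (the socket path is arbitrary; curl's --unix-socket flag handles the client side):

    kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
    curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/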
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":5,"skipped":50,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:43.671: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:41:43.705: INFO: Creating deployment "test-recreate-deployment"
Jan 11 14:41:43.711: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 11 14:41:43.732: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 11 14:41:45.739: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 11 14:41:45.741: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 11 14:41:45.748: INFO: Updating deployment test-recreate-deployment
Jan 11 14:41:45.748: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 11 14:41:45.861: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-9985 9f275b73-23f9-4071-9225-adda110d1ea0 3512 2 2023-01-11 14:41:43 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 14:41:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:41:45 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0034cfa48 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-11 14:41:45 +0000 UTC,LastTransitionTime:2023-01-11 14:41:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2023-01-11 14:41:45 +0000 UTC,LastTransitionTime:2023-01-11 14:41:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 11 14:41:45.868: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-9985 6a31d8db-363e-47be-ba0a-d03b8d7a6c52 3510 1 2023-01-11 14:41:45 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 9f275b73-23f9-4071-9225-adda110d1ea0 0xc00237e2e0 0xc00237e2e1}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:41:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f275b73-23f9-4071-9225-adda110d1ea0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00237e358 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:41:45.868: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 11 14:41:45.868: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-9985 3a56b680-7b61-41d5-a460-ff954597f04d 3501 2 2023-01-11 14:41:43 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 9f275b73-23f9-4071-9225-adda110d1ea0 0xc00237e1f7 0xc00237e1f8}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:41:45 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9f275b73-23f9-4071-9225-adda110d1ea0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00237e288 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:41:45.874: INFO: Pod "test-recreate-deployment-f79dd4667-clxsk" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-clxsk test-recreate-deployment-f79dd4667- deployment-9985 d9d1a58f-e5aa-4b01-9ade-fe30848d6435 3513 0 2023-01-11 14:41:45 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 6a31d8db-363e-47be-ba0a-d03b8d7a6c52 0xc00237e7b0 0xc00237e7b1}] [] [{kube-controller-manager Update v1 2023-01-11 14:41:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6a31d8db-363e-47be-ba0a-d03b8d7a6c52\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:41:45 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-mm9tj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-mm9tj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-mm9tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-cvzb96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:41:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:41:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:41:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-11 14:41:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:45.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9985" for this suite.
•
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:43.747: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:50.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2334" for this suite.
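The adoption spec above relies on the ReplicationController manager claiming any orphan pod whose labels match the controller's selector, which it records as an ownerReference on the pod. A sketch of that check, assuming a bare pod labeled `name=pod-adoption` already exists in the namespace (the real test polls rather than reading once):

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func int32Ptr(i int32) *int32 { return &i }

// adoptOrphan creates an RC whose selector matches the orphan pod's
// labels, then verifies the pod gained an ownerReference to the RC.
func adoptOrphan(ctx context.Context, cs kubernetes.Interface, ns string) error {
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(1),
			Selector: map[string]string{"name": "pod-adoption"},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "pod-adoption"}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "pod-adoption",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				}}},
			},
		},
	}
	if _, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	// Adoption is asynchronous; a real check would poll this read.
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, "pod-adoption", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, ref := range pod.OwnerReferences {
		if ref.Kind == "ReplicationController" && ref.Name == rc.Name {
			fmt.Println("orphan pod adopted")
			return nil
		}
	}
	return fmt.Errorf("pod %s/%s was not adopted", ns, pod.Name)
}
```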
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":6,"skipped":83,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:50.843: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:50.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2106" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":7,"skipped":93,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:50.924: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:41:50.974: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779" in namespace "downward-api-9240" to be "Succeeded or Failed"
Jan 11 14:41:50.990: INFO: Pod "downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779": Phase="Pending", Reason="", readiness=false. Elapsed: 16.07073ms
Jan 11 14:41:52.994: INFO: Pod "downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.019752157s
STEP: Saw pod success
Jan 11 14:41:52.994: INFO: Pod "downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779" satisfied condition "Succeeded or Failed"
Jan 11 14:41:52.997: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779 container client-container: <nil>
STEP: delete the pod
Jan 11 14:41:53.012: INFO: Waiting for pod downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779 to disappear
Jan 11 14:41:53.015: INFO: Pod downwardapi-volume-8fc039a0-1702-48a7-be2a-4c3872113779 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:41:53.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9240" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":93,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:53.031: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-3542
Jan 11 14:41:55.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 11 14:41:55.263: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 11 14:41:55.263: INFO: stdout: "iptables"
Jan 11 14:41:55.263: INFO: proxyMode: iptables
Jan 11 14:41:55.272: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 14:41:55.274: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-3542
STEP: creating replication controller affinity-nodeport-timeout in namespace services-3542
I0111 14:41:55.301560 19 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-3542, replica count: 3
I0111 14:41:58.351907 19 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:41:58.361: INFO: Creating new exec pod
Jan 11 14:42:01.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80'
Jan 11 14:42:01.568: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n"
Jan 11 14:42:01.568: INFO: stdout: ""
Jan 11 14:42:01.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c nc -zv -t -w 2 10.140.59.73 80'
Jan 11 14:42:01.742: INFO: stderr: "+ nc -zv -t -w 2 10.140.59.73 80\nConnection to 10.140.59.73 80 port [tcp/http] succeeded!\n"
Jan 11 14:42:01.742: INFO: stdout: ""
Jan 11 14:42:01.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31608'
Jan 11 14:42:01.948: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31608\nConnection to 172.18.0.5 31608 port [tcp/31608] succeeded!\n"
Jan 11 14:42:01.948: INFO: stdout: ""
Jan 11 14:42:01.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31608'
Jan 11 14:42:02.134: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31608\nConnection to 172.18.0.4 31608 port [tcp/31608] succeeded!\n"
Jan 11 14:42:02.134: INFO: stdout: ""
Jan 11 14:42:02.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31608/ ; done'
Jan 11 14:42:02.391: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n"
Jan 11 14:42:02.392: INFO: stdout: "\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2\naffinity-nodeport-timeout-xvjs2"
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Received response from host: affinity-nodeport-timeout-xvjs2
Jan 11 14:42:02.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31608/'
Jan 11 14:42:02.559: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n"
Jan 11 14:42:02.559: INFO: stdout: "affinity-nodeport-timeout-xvjs2"
Jan 11 14:42:22.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3542 exec execpod-affinityt4vvn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31608/'
Jan 11 14:43:12.757: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31608/\n"
Jan 11 14:43:12.757: INFO: stdout: ""
Jan 11 14:43:12.757: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-3542, will wait for the garbage collector to delete the pods
Jan 11 14:43:12.826: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 7.369606ms
Jan 11 14:43:13.326: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 500.185873ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:20.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3542" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":9,"skipped":99,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:20.369: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 11 14:43:20.438: INFO: Waiting up to 5m0s for pod "pod-3cb8278a-fb56-4779-9ec0-7515af9a7653" in namespace "emptydir-5350" to be "Succeeded or Failed"
Jan 11 14:43:20.448: INFO: Pod "pod-3cb8278a-fb56-4779-9ec0-7515af9a7653": Phase="Pending", Reason="", readiness=false. Elapsed: 9.765966ms
Jan 11 14:43:22.451: INFO: Pod "pod-3cb8278a-fb56-4779-9ec0-7515af9a7653": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013575888s
STEP: Saw pod success
Jan 11 14:43:22.451: INFO: Pod "pod-3cb8278a-fb56-4779-9ec0-7515af9a7653" satisfied condition "Succeeded or Failed"
Jan 11 14:43:22.454: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-3cb8278a-fb56-4779-9ec0-7515af9a7653 container test-container: <nil>
STEP: delete the pod
Jan 11 14:43:22.470: INFO: Waiting for pod pod-3cb8278a-fb56-4779-9ec0-7515af9a7653 to disappear
Jan 11 14:43:22.472: INFO: Pod pod-3cb8278a-fb56-4779-9ec0-7515af9a7653 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:22.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5350" for this suite.
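The session-affinity spec that passed just above the EmptyDir one drives a NodePort Service with ClientIP affinity plus a timeout: the sixteen consecutive curls all landed on one backend (`affinity-nodeport-timeout-xvjs2`), and after waiting out the timeout the affinity entry is expected to expire. A sketch of the Service shape under test; the run does not print the configured timeout or target port, so both values here are assumptions:

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

// affinityService pins each client IP to one backend pod for
// timeoutSeconds; once idle past the timeout, requests may move.
func affinityService(timeoutSeconds int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "affinity-nodeport-timeout"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(9376), // backend port is an assumption
			}},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: int32Ptr(timeoutSeconds)},
			},
		},
	}
}
```

The spec's preliminary `proxyMode: iptables` probe matters here because the affinity timeout is enforced by kube-proxy, and the enforcement mechanism differs per proxy mode.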
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":99,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:22.507: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 11 14:43:22.538: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:43:24.848: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:34.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1366" for this suite.
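The OpenAPI-publishing spec above registers two CRDs that share a group and version and differ only in kind, then checks that both kinds appear in the aggregated OpenAPI document. A sketch of one such CRD (group, kind, and schema are invented for illustration); its sibling would reuse `Group` and `Versions` and change only `Names`:

```go
package sketch

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// fooCRD defines kind Foo in bar.example.com/v1; a second CRD for a
// kind Baz in the same group/version exercises the "different kinds"
// half of the spec.
func fooCRD() *apiextensionsv1.CustomResourceDefinition {
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.bar.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "bar.example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec": {Type: "object", XPreserveUnknownFields: boolPtr(true)},
						},
					},
				},
			}},
		},
	}
}
```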
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":11,"skipped":116,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:34.588: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name cm-test-opt-del-989179b2-43cc-4a4f-8e7e-2d323ea5add0
STEP: Creating configMap with name cm-test-opt-upd-9a3313f3-bfe3-41f4-b27b-b4f881474227
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-989179b2-43cc-4a4f-8e7e-2d323ea5add0
STEP: Updating configmap cm-test-opt-upd-9a3313f3-bfe3-41f4-b27b-b4f881474227
STEP: Creating configMap with name cm-test-opt-create-c44612d1-8d6b-4584-a1f7-18953507815c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:38.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3126" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":116,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:38.730: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 11 14:43:38.770: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-3781 c2f03696-c33b-4ca7-9bf8-0b24bb74b782 4106 0 2023-01-11 14:43:38 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-11 14:43:38 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wvtql,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wvtql,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wvtql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContaine
r{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 11 14:43:38.777: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true)
Jan 11 14:43:40.780: INFO: The status of Pod test-dns-nameservers is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 11 14:43:40.780: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3781 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:43:40.780: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Verifying customized DNS server is configured on pod...
Jan 11 14:43:40.876: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3781 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:43:40.876: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:43:40.981: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:40.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3781" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":13,"skipped":136,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:41.150: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:43:41.193: INFO: Waiting up to 5m0s for pod "downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b" in namespace "projected-3875" to be "Succeeded or Failed"
Jan 11 14:43:41.196: INFO: Pod "downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.683462ms
Jan 11 14:43:43.200: INFO: Pod "downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006232925s
STEP: Saw pod success
Jan 11 14:43:43.200: INFO: Pod "downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b" satisfied condition "Succeeded or Failed"
Jan 11 14:43:43.202: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b container client-container: <nil>
STEP: delete the pod
Jan 11 14:43:43.222: INFO: Waiting for pod downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b to disappear
Jan 11 14:43:43.225: INFO: Pod downwardapi-volume-062e33b7-fc2a-4c9c-871d-e2aa6d46ce7b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:43.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3875" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":223,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:43.241: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:43.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1359" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":15,"skipped":229,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:43.311: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 11 14:43:43.341: INFO: Waiting up to 5m0s for pod "pod-448db672-81cc-48e8-a82f-64994e012b61" in namespace "emptydir-9441" to be "Succeeded or Failed"
Jan 11 14:43:43.344: INFO: Pod "pod-448db672-81cc-48e8-a82f-64994e012b61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937822ms
Jan 11 14:43:45.347: INFO: Pod "pod-448db672-81cc-48e8-a82f-64994e012b61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006318873s
STEP: Saw pod success
Jan 11 14:43:45.347: INFO: Pod "pod-448db672-81cc-48e8-a82f-64994e012b61" satisfied condition "Succeeded or Failed"
Jan 11 14:43:45.349: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-448db672-81cc-48e8-a82f-64994e012b61 container test-container: <nil>
STEP: delete the pod
Jan 11 14:43:45.372: INFO: Waiting for pod pod-448db672-81cc-48e8-a82f-64994e012b61 to disappear
Jan 11 14:43:45.374: INFO: Pod pod-448db672-81cc-48e8-a82f-64994e012b61 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:45.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9441" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":234,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:45.423: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jan 11 14:43:45.466: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jan 11 14:43:45.482: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:45.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-144" for this suite.
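The RuntimeClass spec is another pure API-machinery walk, this time over the cluster-scoped `node.k8s.io/v1` resource, matching the `getting /apis/node.k8s.io/v1` steps above. A sketch of the create/patch/delete-collection portion (handler name and label are assumptions):

```go
package sketch

import (
	"context"

	nodev1 "k8s.io/api/node/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

func runtimeClassLifecycle(ctx context.Context, cs kubernetes.Interface) error {
	rcs := cs.NodeV1().RuntimeClasses() // cluster-scoped: no namespace argument
	rc := &nodev1.RuntimeClass{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "example-runtimeclass",
			Labels: map[string]string{"test": "runtimeclass"},
		},
		Handler: "runc", // must name a handler the CRI runtime actually configures
	}
	if _, err := rcs.Create(ctx, rc, metav1.CreateOptions{}); err != nil {
		return err
	}
	if _, err := rcs.Patch(ctx, rc.Name, types.MergePatchType,
		[]byte(`{"metadata":{"annotations":{"patched":"true"}}}`), metav1.PatchOptions{}); err != nil {
		return err
	}
	// deleting a collection removes every RuntimeClass matching the selector
	return rcs.DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test=runtimeclass"})
}
```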
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":17,"skipped":266,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:45.615: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:43:46.149: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:43:49.167: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:49.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8888" for this suite.
STEP: Destroying namespace "webhook-8888-markers" for this suite.
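"Fail closed" in the spec above means the webhook registration carries `failurePolicy: Fail`, so an unreachable backend turns into a hard deny: the configmap create in the webhook's namespace must be rejected rather than waved through. A sketch of such a registration, with the service name, path, and rule scope as assumptions:

```go
package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func failClosedWebhook(namespace string) *admissionregistrationv1.ValidatingWebhookConfiguration {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/no-such-path" // nothing serves here, so every admission call errors
	return &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.k8s.io"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "fail-closed.k8s.io",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: namespace,
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			// Fail (rather than Ignore) is what makes the webhook "fail closed".
			FailurePolicy:           &fail,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}
```

The real spec also scopes the webhook with a namespace selector so only its marker namespace is affected; without that, a dead fail-closed webhook would block configmap writes cluster-wide.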
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:49.355: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ab737ea8-8d86-48fa-a868-b4124bb2a21b
STEP: Creating a pod to test consume configMaps
Jan 11 14:43:49.391: INFO: Waiting up to 5m0s for pod "pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d" in namespace "configmap-4210" to be "Succeeded or Failed"
Jan 11 14:43:49.394: INFO: Pod "pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.755725ms
Jan 11 14:43:51.401: INFO: Pod "pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010082104s
STEP: Saw pod success
Jan 11 14:43:51.401: INFO: Pod "pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d" satisfied condition "Succeeded or Failed"
Jan 11 14:43:51.404: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:43:51.417: INFO: Waiting for pod pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d to disappear
Jan 11 14:43:51.420: INFO: Pod pod-configmaps-f3dc72cd-01fd-4176-b6ff-fd36a2044c3d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:51.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4210" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":377,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:51.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name cm-test-opt-del-83342ffd-d4e7-4e73-b416-c78d9c9fc448
STEP: Creating configMap with name cm-test-opt-upd-d1be1441-5740-439a-89bc-d3d1e6d695e4
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-83342ffd-d4e7-4e73-b416-c78d9c9fc448
STEP: Updating configmap cm-test-opt-upd-d1be1441-5740-439a-89bc-d3d1e6d695e4
STEP: Creating configMap with name cm-test-opt-create-068b0533-1f2c-40b0-b938-bcbae0515e5f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:43:55.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3138" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":379,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
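What makes the spec above work is the "optional" flag on the ConfigMap volume source: the kubelet tolerates the referenced ConfigMap being deleted and projects keys in when one appears later, which is why the delete/update/create sequence is reflected in the mounted volume. A sketch of a pod with that shape, under assumed names (the image is one seen elsewhere in this log):

    // Pod with an optional ConfigMap-backed volume; names are placeholders.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podWithOptionalConfigMapVolume() *corev1.Pod {
    	optional := true
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "agnhost-container",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image
    				Args:  []string{"pause"},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "cm-volume",
    					MountPath: "/etc/configmap-volume",
    				}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "cm-volume",
    				VolumeSource: corev1.VolumeSource{
    					ConfigMap: &corev1.ConfigMapVolumeSource{
    						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del-example"},
    						// Optional: deletion/creation is reflected in the
    						// volume rather than failing the mount.
    						Optional: &optional,
    					},
    				},
    			}},
    		},
    	}
    }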
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":88,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:45.890: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf in namespace container-probe-7170
Jan 11 14:41:47.934: INFO: Started pod liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf in namespace container-probe-7170
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 14:41:47.937: INFO: Initial restart count of pod liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is 0
Jan 11 14:41:59.969: INFO: Restart count of pod container-probe-7170/liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is now 1 (12.032279695s elapsed)
Jan 11 14:42:20.008: INFO: Restart count of pod container-probe-7170/liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is now 2 (32.07134925s elapsed)
Jan 11 14:42:40.045: INFO: Restart count of pod container-probe-7170/liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is now 3 (52.108622641s elapsed)
Jan 11 14:43:00.080: INFO: Restart count of pod container-probe-7170/liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is now 4 (1m12.14374626s elapsed)
Jan 11 14:44:04.210: INFO: Restart count of pod container-probe-7170/liveness-8eca00e9-9145-4d43-9e6f-84716b32b5bf is now 5 (2m16.273078742s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:04.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7170" for this suite.
• [SLOW TEST:138.337 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":88,"failed":0}
------------------------------
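The steadily climbing restart count above (1, 2, 3, ... roughly every 20 seconds) is the signature of a liveness probe that keeps failing. A hedged sketch of a pod with that shape follows; the command, image, and timings are illustrative rather than copied from the test source, and note that this run's v1.20-era API nests the handler under Probe.Handler (newer releases call the field ProbeHandler).

    // Pod whose liveness probe starts failing shortly after startup, producing
    // a monotonically increasing restart count. Illustrative values only.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podWithFailingLivenessProbe() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "liveness",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image with /bin/sh
    				// Health file exists for 10s, then disappears, so the exec
    				// probe succeeds at first and fails forever after.
    				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
    				LivenessProbe: &corev1.Probe{
    					Handler: corev1.Handler{
    						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
    					},
    					InitialDelaySeconds: 5,
    					PeriodSeconds:       5,
    					FailureThreshold:    1,
    				},
    			}},
    		},
    	}
    }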
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:43:55.635: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-projected-krsl
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 14:43:55.671: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-krsl" in namespace "subpath-2136" to be "Succeeded or Failed"
Jan 11 14:43:55.673: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.191257ms
Jan 11 14:43:57.677: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 2.005918291s
Jan 11 14:43:59.681: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 4.009754388s
Jan 11 14:44:01.685: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 6.01338001s
Jan 11 14:44:03.688: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 8.017027869s
Jan 11 14:44:05.692: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 10.02110082s
Jan 11 14:44:07.698: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 12.026922344s
Jan 11 14:44:09.702: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 14.030475456s
Jan 11 14:44:11.705: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 16.033989334s
Jan 11 14:44:13.709: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 18.037547573s
Jan 11 14:44:15.712: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Running", Reason="", readiness=true. Elapsed: 20.041189181s
Jan 11 14:44:17.716: INFO: Pod "pod-subpath-test-projected-krsl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.044556955s
STEP: Saw pod success
Jan 11 14:44:17.716: INFO: Pod "pod-subpath-test-projected-krsl" satisfied condition "Succeeded or Failed"
Jan 11 14:44:17.719: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-subpath-test-projected-krsl container test-container-subpath-projected-krsl: <nil>
STEP: delete the pod
Jan 11 14:44:17.732: INFO: Waiting for pod pod-subpath-test-projected-krsl to disappear
Jan 11 14:44:17.735: INFO: Pod pod-subpath-test-projected-krsl no longer exists
STEP: Deleting pod pod-subpath-test-projected-krsl
Jan 11 14:44:17.735: INFO: Deleting pod "pod-subpath-test-projected-krsl" in namespace "subpath-2136"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:17.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2136" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":431,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
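For orientation, the "subpaths with projected pod" spec mounts a single entry of a projected volume into the container via subPath rather than mounting the whole volume. A minimal sketch of that pod shape, with placeholder names:

    // Pod mounting one key of a projected volume via subPath. Placeholder names.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podWithProjectedSubpath() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-projected-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "test-container-subpath",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image
    				VolumeMounts: []corev1.VolumeMount{{
    					Name:      "projected-vol",
    					MountPath: "/test-volume",
    					// Mounts a single file out of the volume, not the root.
    					SubPath: "projected-configmap-key", // placeholder key
    				}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "projected-vol",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							ConfigMap: &corev1.ConfigMapProjection{
    								LocalObjectReference: corev1.LocalObjectReference{Name: "example-config"},
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    }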
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:17.757: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:44:17.792: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d" in namespace "projected-2035" to be "Succeeded or Failed"
Jan 11 14:44:17.795: INFO: Pod "downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.237097ms
Jan 11 14:44:19.799: INFO: Pod "downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007124281s
STEP: Saw pod success
Jan 11 14:44:19.799: INFO: Pod "downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d" satisfied condition "Succeeded or Failed"
Jan 11 14:44:19.803: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d container client-container: <nil>
STEP: delete the pod
Jan 11 14:44:19.825: INFO: Waiting for pod downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d to disappear
Jan 11 14:44:19.827: INFO: Pod downwardapi-volume-1c323dd4-55e0-46ad-a88b-33d2d49bc76d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:19.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2035" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":438,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
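The downward API volume specs above project a container's own resource fields into files it can read back. A sketch of the cpu-limit variant, under assumed names and argument values (the mounttest arguments in particular are illustrative):

    // Pod projecting its own CPU limit into a file via a downward API volume.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podWithCPULimitFile() *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "client-container",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // assumed image
    				Args:  []string{"mounttest", "--file_content=/etc/podinfo/cpu_limit"}, // hypothetical args
    				Resources: corev1.ResourceRequirements{
    					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("500m")},
    				},
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					DownwardAPI: &corev1.DownwardAPIVolumeSource{
    						Items: []corev1.DownwardAPIVolumeFile{{
    							Path: "cpu_limit",
    							// Projects limits.cpu of the named container
    							// into /etc/podinfo/cpu_limit.
    							ResourceFieldRef: &corev1.ResourceFieldSelector{
    								ContainerName: "client-container",
    								Resource:      "limits.cpu",
    							},
    						}},
    					},
    				},
    			}},
    		},
    	}
    }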
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:19.866: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name s-test-opt-del-a50f878d-31a3-4825-b8a4-72886d4a55d5
STEP: Creating secret with name s-test-opt-upd-c1d82c20-8b07-46d1-8435-134685b3ed3b
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a50f878d-31a3-4825-b8a4-72886d4a55d5
STEP: Updating secret s-test-opt-upd-c1d82c20-8b07-46d1-8435-134685b3ed3b
STEP: Creating secret with name s-test-opt-create-b55fdb2d-332c-4bf6-acea-c07644dbc552
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:23.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3981" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":462,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:23.991: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 11 14:44:26.546: INFO: Successfully updated pod "pod-update-93876614-d2c0-467e-aa28-120b2ee7db3e"
STEP: verifying the updated pod is in kubernetes
Jan 11 14:44:26.551: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:26.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7957" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":469,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:26.562: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:44:27.057: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 11 14:44:29.066: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045067, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045067, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045067, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045067, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:44:32.085: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:32.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-137" for this suite.
STEP: Destroying namespace "webhook-137-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":25,"skipped":470,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:32.334: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 11 14:44:32.371: INFO: Waiting up to 5m0s for pod "pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2" in namespace "emptydir-7311" to be "Succeeded or Failed"
Jan 11 14:44:32.375: INFO: Pod "pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.384491ms
Jan 11 14:44:34.379: INFO: Pod "pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006657419s
STEP: Saw pod success
Jan 11 14:44:34.379: INFO: Pod "pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2" satisfied condition "Succeeded or Failed"
Jan 11 14:44:34.382: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2 container test-container: <nil>
STEP: delete the pod
Jan 11 14:44:34.395: INFO: Waiting for pod pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2 to disappear
Jan 11 14:44:34.398: INFO: Pod pod-fbdb09f0-1cfb-41f6-bd88-6bff545147f2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:34.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7311" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":481,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:34.452: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 11 14:44:34.497: INFO: Waiting up to 5m0s for pod "pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a" in namespace "emptydir-1371" to be "Succeeded or Failed"
Jan 11 14:44:34.500: INFO: Pod "pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.803973ms
Jan 11 14:44:36.512: INFO: Pod "pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01422386s
STEP: Saw pod success
Jan 11 14:44:36.512: INFO: Pod "pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a" satisfied condition "Succeeded or Failed"
Jan 11 14:44:36.515: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a container test-container: <nil>
STEP: delete the pod
Jan 11 14:44:36.529: INFO: Waiting for pod pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a to disappear
Jan 11 14:44:36.531: INFO: Pod pod-a4aa16b5-d7cb-4b72-94cb-d8ad7710ee4a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:36.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1371" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":506,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:36.575: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-15d4f943-f14d-4edd-b25c-c43fef4a92d3
STEP: Creating a pod to test consume secrets
Jan 11 14:44:36.615: INFO: Waiting up to 5m0s for pod "pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483" in namespace "secrets-6463" to be "Succeeded or Failed"
Jan 11 14:44:36.618: INFO: Pod "pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483": Phase="Pending", Reason="", readiness=false. Elapsed: 2.452647ms
Jan 11 14:44:38.621: INFO: Pod "pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005709654s
STEP: Saw pod success
Jan 11 14:44:38.621: INFO: Pod "pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483" satisfied condition "Succeeded or Failed"
Jan 11 14:44:38.623: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483 container secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:44:38.639: INFO: Waiting for pod pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483 to disappear
Jan 11 14:44:38.641: INFO: Pod pod-secrets-39a15675-2fa0-4d51-bd82-f81c64eeb483 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:38.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6463" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":530,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:38.703: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-b3aa9db7-d6c7-4771-8254-a1160fe98bea
STEP: Creating a pod to test consume secrets
Jan 11 14:44:38.736: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec" in namespace "projected-5336" to be "Succeeded or Failed"
Jan 11 14:44:38.739: INFO: Pod "pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096586ms
Jan 11 14:44:40.742: INFO: Pod "pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005404427s
STEP: Saw pod success
Jan 11 14:44:40.742: INFO: Pod "pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec" satisfied condition "Succeeded or Failed"
Jan 11 14:44:40.745: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:44:40.762: INFO: Waiting for pod pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec to disappear
Jan 11 14:44:40.765: INFO: Pod pod-projected-secrets-6d32802e-3921-481c-82e3-10d4d1b54dec no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:40.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5336" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":574,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:40.785: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:44:40.816: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7" in namespace "downward-api-1716" to be "Succeeded or Failed"
Jan 11 14:44:40.818: INFO: Pod "downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118386ms
Jan 11 14:44:42.822: INFO: Pod "downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006046496s
STEP: Saw pod success
Jan 11 14:44:42.822: INFO: Pod "downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7" satisfied condition "Succeeded or Failed"
Jan 11 14:44:42.825: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7 container client-container: <nil>
STEP: delete the pod
Jan 11 14:44:42.842: INFO: Waiting for pod downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7 to disappear
Jan 11 14:44:42.844: INFO: Pod downwardapi-volume-e43c149d-ca6d-42ce-954a-f9d6862b29b7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:42.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1716" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":584,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:42.884: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 11 14:44:42.934: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8911 f4b3c1ff-1cb6-4e9f-81b3-30eaf1f56ab4 4932 0 2023-01-11 14:44:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-11 14:44:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 11 14:44:42.934: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-8911 f4b3c1ff-1cb6-4e9f-81b3-30eaf1f56ab4 4933 0 2023-01-11 14:44:42 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-11 14:44:42 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:42.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8911" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":31,"skipped":610,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
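The Watchers spec above works because a watch can be started from an explicit resourceVersion rather than from "now": everything that changed after that version (here, the second MODIFIED and the DELETED) is replayed to the new watcher. A minimal client-go sketch of the pattern, with the label selector taken from the log and everything else illustrative:

    // Watch ConfigMaps starting from a known resourceVersion. Sketch only.
    package examples

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func watchConfigMapsFrom(ctx context.Context, cs kubernetes.Interface, ns, fromVersion string) error {
    	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
    		// Resume the event stream at a known point: every change made
    		// *after* this version is delivered, even if it already happened.
    		ResourceVersion: fromVersion,
    		LabelSelector:   "watch-this-configmap=from-resource-version", // label used in the log
    	})
    	if err != nil {
    		return err
    	}
    	defer w.Stop()
    	for ev := range w.ResultChan() {
    		fmt.Printf("Got : %s\n", ev.Type) // e.g. MODIFIED, DELETED
    	}
    	return nil
    }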
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:04.240: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-9638
Jan 11 14:44:06.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 11 14:44:06.483: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Jan 11 14:44:06.483: INFO: stdout: "iptables"
Jan 11 14:44:06.483: INFO: proxyMode: iptables
Jan 11 14:44:06.497: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Jan 11 14:44:06.502: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-clusterip-timeout in namespace services-9638
STEP: creating replication controller affinity-clusterip-timeout in namespace services-9638
I0111 14:44:06.523049 14 runners.go:190] Created replication controller with name: affinity-clusterip-timeout, namespace: services-9638, replica count: 3
I0111 14:44:09.574448 14 runners.go:190] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:44:09.580: INFO: Creating new exec pod
Jan 11 14:44:12.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec execpod-affinityfppvw -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-timeout 80'
Jan 11 14:44:12.809: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-timeout 80\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n"
Jan 11 14:44:12.809: INFO: stdout: ""
Jan 11 14:44:12.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec execpod-affinityfppvw -- /bin/sh -x -c nc -zv -t -w 2 10.143.47.109 80'
Jan 11 14:44:12.986: INFO: stderr: "+ nc -zv -t -w 2 10.143.47.109 80\nConnection to 10.143.47.109 80 port [tcp/http] succeeded!\n"
Jan 11 14:44:12.986: INFO: stdout: ""
Jan 11 14:44:12.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec execpod-affinityfppvw -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.143.47.109:80/ ; done'
Jan 11 14:44:13.227: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n"
Jan 11 14:44:13.227: INFO: stdout: "\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc\naffinity-clusterip-timeout-f9jcc"
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Received response from host: affinity-clusterip-timeout-f9jcc
Jan 11 14:44:13.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec execpod-affinityfppvw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.143.47.109:80/'
Jan 11 14:44:13.395: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n"
Jan 11 14:44:13.395: INFO: stdout: "affinity-clusterip-timeout-f9jcc"
Jan 11 14:44:33.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9638 exec execpod-affinityfppvw -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.143.47.109:80/'
Jan 11 14:44:33.585: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.143.47.109:80/\n"
Jan 11 14:44:33.585: INFO: stdout: "affinity-clusterip-timeout-n97zp"
Jan 11 14:44:33.585: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-timeout in namespace services-9638, will wait for the garbage collector to delete the pods
Jan 11 14:44:33.655: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 5.727977ms
Jan 11 14:44:33.756: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 100.259606ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:46.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9638" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":94,"failed":0}
------------------------------
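The behavior being verified above: with ClientIP session affinity plus a timeout, the sixteen back-to-back curls all land on one backend (f9jcc), while a request sent after an idle gap lands on a different one (n97zp) because the affinity entry has expired. A sketch of the Service shape involved; the timeout, selector, and port values are assumptions for illustration:

    // ClusterIP Service with ClientIP session affinity and a timeout. Sketch.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    func affinityServiceWithTimeout() *corev1.Service {
    	timeout := int32(10) // seconds; illustrative value
    	return &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "affinity-clusterip-timeout"},
    		Spec: corev1.ServiceSpec{
    			Selector: map[string]string{"name": "affinity-clusterip-timeout"}, // assumed selector
    			Ports: []corev1.ServicePort{{
    				Port:       80,
    				TargetPort: intstr.FromInt(9376), // assumed backend port
    			}},
    			// Pin each client IP to one backend until the timeout lapses.
    			SessionAffinity: corev1.ServiceAffinityClientIP,
    			SessionAffinityConfig: &corev1.SessionAffinityConfig{
    				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
    			},
    		},
    	}
    }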
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:40:46.728: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod test-webserver-52b560d4-d910-477f-ae16-8d52eb1357a5 in namespace container-probe-1713
Jan 11 14:40:48.772: INFO: Started pod test-webserver-52b560d4-d910-477f-ae16-8d52eb1357a5 in namespace container-probe-1713
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 14:40:48.775: INFO: Initial restart count of pod test-webserver-52b560d4-d910-477f-ae16-8d52eb1357a5 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:49.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1713" for this suite.
• [SLOW TEST:242.540 seconds]
[k8s.io] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":180,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:46.860: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:44:46.887: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 11 14:44:48.922: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:49.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7594" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":11,"skipped":127,"failed":0}
------------------------------
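The quota interaction the spec above relies on: a ResourceQuota capping the namespace at two pods makes an RC that asks for three record a ReplicaFailure condition, which clears once the RC is scaled back within quota. A sketch of both halves, with the quota name taken from the log and the rest assumed:

    // ResourceQuota capping pods at two, and a check for the RC failure
    // condition the spec inspects. Sketch only.
    package examples

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/resource"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func podQuota() *corev1.ResourceQuota {
    	return &corev1.ResourceQuota{
    		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"}, // name from the log
    		Spec: corev1.ResourceQuotaSpec{
    			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
    		},
    	}
    }

    // rcHasReplicaFailure reports whether the controller carries the failure
    // condition set when pod creation is rejected (for example, by quota).
    func rcHasReplicaFailure(rc *corev1.ReplicationController) bool {
    	for _, c := range rc.Status.Conditions {
    		if c.Type == corev1.ReplicationControllerReplicaFailure && c.Status == corev1.ConditionTrue {
    			return true
    		}
    	}
    	return false
    }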
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:49.955: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:44:49.996: INFO: Waiting up to 5m0s for pod "downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae" in namespace "projected-9733" to be "Succeeded or Failed"
Jan 11 14:44:50.009: INFO: Pod "downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae": Phase="Pending", Reason="", readiness=false. Elapsed: 11.847451ms
Jan 11 14:44:52.011: INFO: Pod "downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014580679s
STEP: Saw pod success
Jan 11 14:44:52.011: INFO: Pod "downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae" satisfied condition "Succeeded or Failed"
Jan 11 14:44:52.015: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae container client-container: <nil>
STEP: delete the pod
Jan 11 14:44:52.034: INFO: Waiting for pod downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae to disappear
Jan 11 14:44:52.036: INFO: Pod downwardapi-volume-21519301-9c48-4e5c-87b0-6813a54161ae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:52.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9733" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":136,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:49.288: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:44:49.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 create -f -'
Jan 11 14:44:50.152: INFO: stderr: ""
Jan 11 14:44:50.152: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jan 11 14:44:50.152: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 create -f -'
Jan 11 14:44:50.428: INFO: stderr: ""
Jan 11 14:44:50.428: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 11 14:44:51.433: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:44:51.433: INFO: Found 0 / 1
Jan 11 14:44:52.432: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:44:52.432: INFO: Found 1 / 1
Jan 11 14:44:52.432: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 11 14:44:52.435: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:44:52.435: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 11 14:44:52.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 describe pod agnhost-primary-dpj7t'
Jan 11 14:44:52.534: INFO: stderr: ""
Jan 11 14:44:52.534: INFO: stdout: "Name: agnhost-primary-dpj7t\nNamespace: kubectl-5756\nPriority: 0\nNode: k8s-upgrade-and-conformance-dctc5v-worker-cvzb96/172.18.0.6\nStart Time: Wed, 11 Jan 2023 14:44:50 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.3.26\nIPs:\n IP: 192.168.3.26\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://755909b0456b7ab103c57992ffdd5c17fe5e5ecd338a847393be31b061b7f267\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 11 Jan 2023 14:44:51 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-986gl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-986gl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-986gl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-5756/agnhost-primary-dpj7t to k8s-upgrade-and-conformance-dctc5v-worker-cvzb96\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n"
Jan 11 14:44:52.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 describe rc agnhost-primary'
Jan 11 14:44:52.638: INFO: stderr: ""
Jan 11 14:44:52.638: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5756\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-dpj7t\n"
Jan 11 14:44:52.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 describe service agnhost-primary'
Jan 11 14:44:52.737: INFO: stderr: ""
Jan 11 14:44:52.737: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-5756\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: <none>\nIP: 10.142.148.71\nIPs: 10.142.148.71\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.3.26:6379\nSession Affinity: None\nEvents: <none>\n"
Jan 11 14:44:52.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 describe node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz'
Jan 11 14:44:52.862: INFO: stderr: ""
Jan 11 14:44:52.862: INFO: stdout: "Name: k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\n kubernetes.io/os=linux\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-dctc5v\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-0z48fg\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\n cluster.x-k8s.io/owner-kind: MachineSet\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 11 Jan 2023 14:38:28 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\n AcquireTime: <unset>\n RenewTime: Wed, 11 Jan 2023 14:44:45 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 11 Jan 2023 14:41:29 +0000 Wed, 11 Jan 2023 14:38:28 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 11 Jan 2023 14:41:29 +0000 Wed, 11 Jan 2023 14:38:28 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 11 Jan 2023 14:41:29 +0000 Wed, 11 Jan 2023 14:38:28 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 11 Jan 2023 14:41:29 +0000 Wed, 11 Jan 2023 14:38:38 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: e5199bee92e14aa98432a9b7e389c256\n System UUID: ce9e3122-9162-4bd7-ad9f-2ab5326e94e2\n Boot ID: 896c64ca-9718-4575-852d-b1088d7f44fd\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 22.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9\n Kubelet Version: v1.20.15\n Kube-Proxy Version: v1.20.15\nPodCIDR: 192.168.1.0/24\nPodCIDRs: 192.168.1.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-5zvbh 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 6m14s\n kube-system kindnet-55r4c 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 6m24s\n kube-system kube-proxy-tjqbl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m24s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (2%) 100m (1%)\n memory 120Mi (0%) 220Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 6m24s kubelet Starting kubelet.\n Warning InvalidDiskCapacity 6m24s kubelet invalid capacity 0 on image filesystem\n Warning CheckLimitsForResolvConf 6m24s kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Normal NodeHasSufficientMemory 6m24s (x2 over 6m24s) kubelet Node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz status is now: NodeHasSufficientMemory\n Normal NodeHasNoDiskPressure 6m24s (x2 over 6m24s) kubelet Node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 6m24s (x2 over 6m24s) kubelet Node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 6m24s kubelet Updated Node Allocatable limit across pods\n Normal Starting 6m21s kube-proxy Starting kube-proxy.\n Normal NodeReady 6m14s kubelet Node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz status is now: NodeReady\n"
Jan 11 14:44:52.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5756 describe namespace kubectl-5756'
Jan 11 14:44:52.963: INFO: stderr: ""
Jan 11 14:44:52.963: INFO: stdout: "Name: kubectl-5756\nLabels: e2e-framework=kubectl\n e2e-run=6864bca8-56cf-4eaa-b871-c0817702acbf\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:52.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5756" for this suite.
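The describe flow above reduces to these hand-runnable commands (the resource names are the generated ones from this run; substitute your own):

kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5756 describe pod agnhost-primary-dpj7t
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5756 describe rc agnhost-primary
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-5756 describe service agnhost-primary
kubectl --kubeconfig=/tmp/kubeconfig describe node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz
kubectl --kubeconfig=/tmp/kubeconfig describe namespace kubectl-5756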
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":10,"skipped":193,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:53.014: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:44:54.076: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:44:57.096: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:44:57.098: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-494" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":11,"skipped":223,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:52.061: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:44:52.085: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:44:58.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7462" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":13,"skipped":149,"failed":0}
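A minimal sketch of the kind of object the listing spec works with, assuming a throwaway group example.com (the group, names, and schema here are all hypothetical):

# Register a trivial CRD, then exercise the list verb the spec checks.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl get customresourcedefinitions        # the listing under test
kubectl delete crd foos.example.com          # clean up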
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:58.389: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-map-93bb12aa-7a00-401d-8dbd-2360970421ff
STEP: Creating a pod to test consume secrets
Jan 11 14:44:58.429: INFO: Waiting up to 5m0s for pod "pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370" in namespace "secrets-2336" to be "Succeeded or Failed"
Jan 11 14:44:58.433: INFO: Pod "pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370": Phase="Pending", Reason="", readiness=false. Elapsed: 3.852364ms
Jan 11 14:45:00.437: INFO: Pod "pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007979112s
STEP: Saw pod success
Jan 11 14:45:00.437: INFO: Pod "pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370" satisfied condition "Succeeded or Failed"
Jan 11 14:45:00.440: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370 container secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:45:00.453: INFO: Waiting for pod pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370 to disappear
Jan 11 14:45:00.455: INFO: Pod pod-secrets-2f3f473f-c4f2-41b7-9f98-6a724e41a370 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:00.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2336" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":216,"failed":0}
------------------------------
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:00.506: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:00.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5965" for this suite.
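The QOS check above hinges on requests equalling limits for every container. A hand-run sketch (pod name and image are hypothetical):

# Requests == limits for every resource of every container => Guaranteed.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:
        cpu: 100m
        memory: 100Mi
EOF
kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # prints: Guaranteed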
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":15,"skipped":248,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:58.496: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-aebe2217-d1f8-4588-b594-a2b6bc8d6b2e
STEP: Creating a pod to test consume secrets
Jan 11 14:44:58.547: INFO: Waiting up to 5m0s for pod "pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0" in namespace "secrets-6060" to be "Succeeded or Failed"
Jan 11 14:44:58.552: INFO: Pod "pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0": Phase="Pending", Reason="", readiness=false. Elapsed: 5.479999ms
Jan 11 14:45:00.555: INFO: Pod "pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008466728s
STEP: Saw pod success
Jan 11 14:45:00.555: INFO: Pod "pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0" satisfied condition "Succeeded or Failed"
Jan 11 14:45:00.560: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0 container secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:45:00.574: INFO: Waiting for pod pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0 to disappear
Jan 11 14:45:00.577: INFO: Pod pod-secrets-10a23655-65db-433c-a19c-0891886cf4e0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:00.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6060" for this suite.
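A rough hand-run equivalent of the defaultMode behaviour this spec verifies (secret and pod names are hypothetical; the e2e image checks the file mode programmatically, ls -l is the manual stand-in):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400          # the mode under test
EOF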
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":370,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:00.594: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:45:01.129: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:45:04.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:04.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1433" for this suite.
STEP: Destroying namespace "webhook-1433-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":13,"skipped":374,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:04.235: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:04.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-7749" for this suite.
•
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:00.580: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:13.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-676" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":16,"skipped":264,"failed":0}
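The quota lifecycle above can be replayed by hand with a quota like this (the name and the hard limits are hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
spec:
  hard:
    pods: "1"
    requests.cpu: 500m
    requests.memory: 500Mi
EOF
# Used vs Hard updates as pods are created and deleted; a second pod, or one
# exceeding the remaining request headroom, is rejected at admission time.
kubectl describe resourcequota pod-quota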
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:13.756: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:13.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":17,"skipped":320,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:13.821: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 11 14:45:17.886: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:17.889: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 14:45:19.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:19.893: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 14:45:21.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:21.893: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 14:45:23.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:23.893: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 14:45:25.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:25.893: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 11 14:45:27.889: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 11 14:45:27.893: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:27.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8562" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":331,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:27.938: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 11 14:45:27.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-944 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine'
Jan 11 14:45:28.089: INFO: stderr: ""
Jan 11 14:45:28.089: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524
Jan 11 14:45:28.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-944 delete pods e2e-test-httpd-pod'
Jan 11 14:45:36.265: INFO: stderr: ""
Jan 11 14:45:36.266: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:36.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-944" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":19,"skipped":359,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:36.310: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-projected-all-test-volume-417c6dfd-6684-4eba-a8a0-e775e722d30f
STEP: Creating secret with name secret-projected-all-test-volume-f2f2a44a-3501-4fdb-9a6b-fbd53c358b73
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 11 14:45:36.350: INFO: Waiting up to 5m0s for pod "projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3" in namespace "projected-8825" to be "Succeeded or Failed"
Jan 11 14:45:36.352: INFO: Pod "projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.533567ms
Jan 11 14:45:38.356: INFO: Pod "projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005873885s
STEP: Saw pod success
Jan 11 14:45:38.356: INFO: Pod "projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3" satisfied condition "Succeeded or Failed"
Jan 11 14:45:38.358: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3 container projected-all-volume-test: <nil>
STEP: delete the pod
Jan 11 14:45:38.374: INFO: Waiting for pod projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3 to disappear
Jan 11 14:45:38.377: INFO: Pod projected-volume-0b0a9d29-4b3b-434d-93d0-39a2ad5ef7a3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:38.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8825" for this suite.
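The "all components" projection combines a downward API item, a ConfigMap key, and a Secret key behind a single mount. A hand-run sketch with hypothetical names:

kubectl create configmap projected-cm --from-literal=configmap-data=hello
kubectl create secret generic projected-secret --from-literal=secret-data=shh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox:1.36
    command: ["sh", "-c", "cat /all/podname /all/cm /all/secret"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all
  volumes:
  - name: all-in-one
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
      - configMap:
          name: projected-cm
          items:
          - key: configmap-data
            path: cm
      - secret:
          name: projected-secret
          items:
          - key: secret-data
            path: secret
EOF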
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":365,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:38.465: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:45:39.154: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:45:42.177: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:42.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3046" for this suite.
STEP: Destroying namespace "webhook-3046-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":21,"skipped":428,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:42.327: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 11 14:45:42.356: INFO: Waiting up to 5m0s for pod "pod-3597cb74-e784-42d4-84ef-9847d0be4224" in namespace "emptydir-9125" to be "Succeeded or Failed"
Jan 11 14:45:42.358: INFO: Pod "pod-3597cb74-e784-42d4-84ef-9847d0be4224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.502415ms
Jan 11 14:45:44.362: INFO: Pod "pod-3597cb74-e784-42d4-84ef-9847d0be4224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006012596s
STEP: Saw pod success
Jan 11 14:45:44.362: INFO: Pod "pod-3597cb74-e784-42d4-84ef-9847d0be4224" satisfied condition "Succeeded or Failed"
Jan 11 14:45:44.364: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-3597cb74-e784-42d4-84ef-9847d0be4224 container test-container: <nil>
STEP: delete the pod
Jan 11 14:45:44.381: INFO: Waiting for pod pod-3597cb74-e784-42d4-84ef-9847d0be4224 to disappear
Jan 11 14:45:44.384: INFO: Pod pod-3597cb74-e784-42d4-84ef-9847d0be4224 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:44.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9125" for this suite.
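A sketch of the (non-root,0644,tmpfs) combination this spec exercises; names are illustrative, and where the e2e image sets the file mode explicitly, the 0644 here simply falls out of the default umask:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the non-root part
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "echo hi > /test/file && ls -l /test/file"]
    volumeMounts:
    - name: test-volume
      mountPath: /test
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF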
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":453,"failed":0}
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:44.439: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jan 11 14:45:44.472: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jan 11 14:45:44.477: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 11 14:45:44.477: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jan 11 14:45:44.483: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 11 14:45:44.483: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jan 11 14:45:44.494: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Jan 11 14:45:44.494: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jan 11 14:45:51.532: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:45:51.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-1706" for this suite.
•
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":23,"skipped":491,"failed":0}
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:41:20.675: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0111 14:41:26.763366 18 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 11 14:46:26.767: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:26.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3366" for this suite.
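The deleteOptions behaviour under test is foreground cascading deletion: the RC remains (with a deletionTimestamp and the foregroundDeletion finalizer) until the garbage collector has removed its pods. A hand-run sketch with a hypothetical RC, assuming a kubectl new enough to accept the tri-state --cascade flag:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: gc-demo-rc
spec:
  replicas: 2
  selector:
    app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.2
EOF
kubectl delete rc gc-demo-rc --cascade=foreground --wait=false
kubectl get rc gc-demo-rc -o jsonpath='{.metadata.deletionTimestamp}'   # set while the pods drain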
• [SLOW TEST:306.099 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":3,"skipped":32,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:26.785: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-a30725c7-b4ba-4427-a354-d8b2f334a6e4
STEP: Creating a pod to test consume configMaps
Jan 11 14:46:26.824: INFO: Waiting up to 5m0s for pod "pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f" in namespace "configmap-2435" to be "Succeeded or Failed"
Jan 11 14:46:26.827: INFO: Pod "pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.335809ms
Jan 11 14:46:28.831: INFO: Pod "pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006177242s
STEP: Saw pod success
Jan 11 14:46:28.831: INFO: Pod "pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f" satisfied condition "Succeeded or Failed"
Jan 11 14:46:28.833: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:46:28.849: INFO: Waiting for pod pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f to disappear
Jan 11 14:46:28.853: INFO: Pod pod-configmaps-dc595c77-dfac-4a25-a9ca-152d33c84e0f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:28.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2435" for this suite.
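A minimal reproduction of the non-root consumption this spec checks (ConfigMap, pod, and uid are hypothetical):

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # read the mounted key as a non-root uid
    runAsNonRoot: true
  containers:
  - name: agnhost-container
    image: busybox:1.36
    command: ["sh", "-c", "cat /etc/cm/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: demo-cm
EOF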
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":38,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:28.867: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override all
Jan 11 14:46:28.899: INFO: Waiting up to 5m0s for pod "client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2" in namespace "containers-4612" to be "Succeeded or Failed"
Jan 11 14:46:28.902: INFO: Pod "client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824203ms
Jan 11 14:46:30.905: INFO: Pod "client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006142187s
STEP: Saw pod success
Jan 11 14:46:30.905: INFO: Pod "client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2" satisfied condition "Succeeded or Failed"
Jan 11 14:46:30.909: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2 container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:46:30.929: INFO: Waiting for pod client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2 to disappear
Jan 11 14:46:30.932: INFO: Pod client-containers-5a9d0ef4-10d7-48dc-aff0-3b128d99fcc2 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:30.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4612" for this suite.
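What "override the image's default command and arguments" means in pod terms, as a hand-run sketch (busybox stands in for the agnhost image the suite uses; the name is hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox:1.36
    command: ["echo"]            # replaces the image ENTRYPOINT
    args: ["overridden", "args"] # replaces the image CMD
EOF
kubectl logs override-demo       # once it completes: overridden args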
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":41,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:30.962: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of events
Jan 11 14:46:30.997: INFO: created test-event-1
Jan 11 14:46:31.000: INFO: created test-event-2
Jan 11 14:46:31.003: INFO: created test-event-3
STEP: get a list of Events with a label in the current namespace
STEP: delete collection of events
Jan 11 14:46:31.008: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
Jan 11 14:46:31.022: INFO: requesting list of events to confirm quantity
[AfterEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:31.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-558" for this suite.
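The spec drives a labelSelector-scoped DeleteCollection on events through the client directly; with kubectl the closest hand-run approximation deletes everything matching the selector (the label here is hypothetical, standing in for whatever the spec stamps on its three test events):

kubectl get events -l testevent-set=true       # list the labelled set
kubectl delete events -l testevent-set=true    # remove the whole set by selector
kubectl get events -l testevent-set=true       # confirm it is gone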
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should delete a collection of events [Conformance]","total":-1,"completed":6,"skipped":56,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:31.056: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 11 14:46:31.111: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 11 14:46:31.114: INFO: starting watch
STEP: patching
STEP: updating
Jan 11 14:46:31.126: INFO: waiting for watch events with expected annotations
Jan 11 14:46:31.126: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:31.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-8601" for this suite.
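The Ingress API case above walks the standard verbs (create, get, list, watch, patch, update, delete) against networking.k8s.io/v1. A minimal client-go sketch of the create step (object name, backend service, and port are illustrative, not from this run):

package main

import (
	"context"
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ing := &networkingv1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Name: "ingress-demo"},
		Spec: networkingv1.IngressSpec{
			// Route everything to one backend service; name and port are assumed.
			DefaultBackend: &networkingv1.IngressBackend{
				Service: &networkingv1.IngressServiceBackend{
					Name: "example-svc",
					Port: networkingv1.ServiceBackendPort{Number: 80},
				},
			},
		},
	}
	created, err := cs.NetworkingV1().Ingresses("default").Create(context.TODO(), ing, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created ingress", created.Name)
}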
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":7,"skipped":71,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:31.185: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:46:31.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3645 version'
Jan 11 14:46:31.306: INFO: stderr: ""
Jan 11 14:46:31.306: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.15\", GitCommit:\"8f1e5bf0b9729a899b8df86249b56e2c74aebc55\", GitTreeState:\"clean\", BuildDate:\"2022-01-19T17:27:39Z\", GoVersion:\"go1.15.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"20\", GitVersion:\"v1.20.15\", GitCommit:\"8f1e5bf0b9729a899b8df86249b56e2c74aebc55\", GitTreeState:\"clean\", BuildDate:\"2022-10-26T15:31:34Z\", GoVersion:\"go1.15.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:31.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3645" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":8,"skipped":79,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:51.571: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:45:51.598: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating first CR
Jan 11 14:45:52.153: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:45:52Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:45:52Z]] name:name1 resourceVersion:6061 uid:32e2f495-9403-46e4-8d0d-43fdda459cd0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 11 14:46:02.159: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:46:02Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:46:02Z]] name:name2 resourceVersion:6140 uid:80218ec6-2b34-4c74-a5bf-88eb9baa4d9c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 11 14:46:12.165: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:45:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:46:12Z]] name:name1 resourceVersion:6165 uid:32e2f495-9403-46e4-8d0d-43fdda459cd0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 11 14:46:22.173: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:46:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:46:22Z]] name:name2 resourceVersion:6185 uid:80218ec6-2b34-4c74-a5bf-88eb9baa4d9c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 11 14:46:32.183: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:45:52Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:46:12Z]] name:name1 resourceVersion:6373 uid:32e2f495-9403-46e4-8d0d-43fdda459cd0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 11 14:46:42.190: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-11T14:46:02Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-11T14:46:22Z]] name:name2 resourceVersion:6440 uid:80218ec6-2b34-4c74-a5bf-88eb9baa4d9c] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:52.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-1608" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":24,"skipped":505,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:52.712: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Jan 11 14:46:52.754: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:52.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1782" for this suite.
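The CustomResourceDefinition Watch case above consumes the ADDED/MODIFIED/DELETED events through a watch on the custom resource. A minimal sketch with the dynamic client (the plural resource name "noxus" and the namespace are assumptions; the e2e suite generates its own CRD):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Group/version match the CRD in the log above; the resource plural is assumed.
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "noxus"}
	w, err := dyn.Resource(gvr).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED / MODIFIED / DELETED, as in the log above
	}
}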
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":25,"skipped":508,"failed":0}
SSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:52.786: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:46:52.817: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561" in namespace "security-context-test-6197" to be "Succeeded or Failed"
Jan 11 14:46:52.819: INFO: Pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561": Phase="Pending", Reason="", readiness=false. Elapsed: 1.935202ms
Jan 11 14:46:54.822: INFO: Pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005402202s
Jan 11 14:46:56.826: INFO: Pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009247926s
Jan 11 14:46:56.826: INFO: Pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561" satisfied condition "Succeeded or Failed"
Jan 11 14:46:56.831: INFO: Got logs for pod "busybox-privileged-false-f3c31758-e526-42d8-b975-d4bd27ffc561": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:46:56.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6197" for this suite.
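The Security Context case above verifies that privileged: false denies capability-gated operations, which is why the pod log shows "RTNETLINK answers: Operation not permitted". A minimal sketch of such a pod (names, image, and the exact command are illustrative, not from this run):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	priv := false
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-privileged-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "demo",
				Image: "busybox",
				// Adding a link needs CAP_NET_ADMIN, which an unprivileged container lacks,
				// so this command is expected to fail with "Operation not permitted".
				Command:         []string{"ip", "link", "add", "dummy0", "type", "dummy"},
				SecurityContext: &corev1.SecurityContext{Privileged: &priv},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created unprivileged pod")
}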
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":512,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:56.859: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:46:56.889: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 11 14:47:01.892: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 11 14:47:01.892: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 11 14:47:03.896: INFO: Creating deployment "test-rollover-deployment"
Jan 11 14:47:03.903: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 11 14:47:05.909: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 11 14:47:05.915: INFO: Ensure that both replica sets have 1 created replica
Jan 11 14:47:05.919: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 11 14:47:05.925: INFO: Updating deployment test-rollover-deployment
Jan 11 14:47:05.925: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 11 14:47:07.931: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 11 14:47:07.936: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 11 14:47:07.941: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 14:47:07.941: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045227, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:47:09.948: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 14:47:09.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045227, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:47:11.947: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 14:47:11.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045227, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:47:13.948: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 14:47:13.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045227, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:47:15.947: INFO: all replica sets need to contain the pod-template-hash label
Jan 11 14:47:15.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045227, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045223, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668db69979\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:47:17.957: INFO:
Jan 11 14:47:17.957: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79
Jan 11 14:47:17.971: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-3519 340baaed-503c-48a0-8f9b-6d09b1f98281 6641 2 2023-01-11 14:47:03 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 14:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00344b718 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-11 14:47:03 +0000 UTC,LastTransitionTime:2023-01-11 14:47:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668db69979" has successfully progressed.,LastUpdateTime:2023-01-11 14:47:17 +0000 UTC,LastTransitionTime:2023-01-11 14:47:03 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 11 14:47:17.979: INFO: New ReplicaSet "test-rollover-deployment-668db69979" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668db69979 deployment-3519 2ab25116-711a-4dfe-86e7-5f66b38842b7 6630 2 2023-01-11 14:47:05 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 340baaed-503c-48a0-8f9b-6d09b1f98281 0xc00344bb57 0xc00344bb58}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"340baaed-503c-48a0-8f9b-6d09b1f98281\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668db69979,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00344bbe8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 11 14:47:17.979: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 11 14:47:17.979: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-3519 0284b7c1-5d65-4458-ba72-3a096175ec93 6640 2 2023-01-11 14:46:56 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 340baaed-503c-48a0-8f9b-6d09b1f98281 0xc00344ba4f 0xc00344ba60}] [] [{e2e.test Update apps/v1 2023-01-11 14:46:56 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:47:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"340baaed-503c-48a0-8f9b-6d09b1f98281\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00344baf8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 11 14:47:17.980: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-78bc8b888c deployment-3519 aa7d7b3b-d3d6-400e-ad39-eca7b76db9f8 6592 2 2023-01-11 14:47:03 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 340baaed-503c-48a0-8f9b-6d09b1f98281 0xc00344bc47 0xc00344bc48}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"340baaed-503c-48a0-8f9b-6d09b1f98281\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 78bc8b888c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:78bc8b888c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00344bcd8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 11 14:47:17.987: INFO: Pod "test-rollover-deployment-668db69979-kqqxk" is available: &Pod{ObjectMeta:{test-rollover-deployment-668db69979-kqqxk test-rollover-deployment-668db69979- deployment-3519 58d099b9-55d2-45ba-971d-7af7585c184b 6609 0 2023-01-11 14:47:05 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668db69979] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668db69979 2ab25116-711a-4dfe-86e7-5f66b38842b7 0xc002b57147 0xc002b57148}] [] [{kube-controller-manager Update v1 2023-01-11 14:47:05 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2ab25116-711a-4dfe-86e7-5f66b38842b7\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:47:07 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.33\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s6pml,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s6pml,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s6pml,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-cvzb96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:47:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:47:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:47:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:47:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.33,StartTime:2023-01-11 14:47:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:47:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://348bb8552c25558b7eef45278d82d7a6959548a215d58ec460c03cb362759c97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:47:17.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3519" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":27,"skipped":528,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:46:31.325: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:46:31.702: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:46:34.719: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the webhook via the AdmissionRegistration API
Jan 11 14:46:44.737: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:46:54.847: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:47:04.949: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:47:15.050: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:47:25.071: INFO: Waiting for webhook configuration to be ready...
Jan 11 14:47:25.072: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001fa200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhook(0xc000f62840, 0xc0033207e0, 0xc, 0xc002b90d70, 0x20fb, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:908 +0xd4a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:194 +0x69
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0023dfe00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0023dfe00)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0023dfe00, 0x4fc9940)
    /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:47:25.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8013" for this suite.
STEP: Destroying namespace "webhook-8013-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• Failure [53.871 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 11 14:47:25.072: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001fa200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:908
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:47:18.052: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod liveness-d22614c0-5f3a-4a66-b6f5-5aaee460ed36 in namespace container-probe-2903
Jan 11 14:47:20.132: INFO: Started pod liveness-d22614c0-5f3a-4a66-b6f5-5aaee460ed36 in namespace container-probe-2903
STEP: checking the pod's current state and verifying that restartCount is present
Jan 11 14:47:20.138: INFO: Initial restart count of pod liveness-d22614c0-5f3a-4a66-b6f5-5aaee460ed36 is 0
Jan 11 14:47:44.228: INFO: Restart count of pod container-probe-2903/liveness-d22614c0-5f3a-4a66-b6f5-5aaee460ed36 is now 1 (24.089756546s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:47:44.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2903" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":548,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:47:44.334: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating the pod
Jan 11 14:47:46.946: INFO: Successfully updated pod "annotationupdateb65ab138-2d9c-405f-acf0-e206ad1ebf37"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:47:51.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3132" for this suite.
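The /healthz liveness-probe case above restarts a container after a failed HTTP check, which is what produced the restart count of 1 in the log. A minimal sketch of the probe wiring (port and thresholds are illustrative; note the embedded field is named Handler instead of ProbeHandler in client-go releases before v0.22):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "liveness",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // image seen elsewhere in this run; the probe target is assumed
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				// kubelet GETs this path; a non-2xx/3xx response counts as a failure.
				HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
			},
			InitialDelaySeconds: 5,
			PeriodSeconds:       3,
			FailureThreshold:    1, // a single failed /healthz check triggers a container restart
		},
	}
	fmt.Printf("%+v\n", c.LivenessProbe)
}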
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":574,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:47:51.145: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:47:51.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a" in namespace "projected-6507" to be "Succeeded or Failed"
Jan 11 14:47:51.226: INFO: Pod "downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.037112ms
Jan 11 14:47:53.232: INFO: Pod "downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010651722s
STEP: Saw pod success
Jan 11 14:47:53.232: INFO: Pod "downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a" satisfied condition "Succeeded or Failed"
Jan 11 14:47:53.236: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a container client-container: <nil>
STEP: delete the pod
Jan 11 14:47:53.260: INFO: Waiting for pod downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a to disappear
Jan 11 14:47:53.265: INFO: Pod downwardapi-volume-f8c0cf94-a3a1-4cdd-9c70-4188b27c5a0a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:47:53.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6507" for this suite.
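The Projected downwardAPI case above mounts a file whose content is the container's memory request, via a resourceFieldRef inside a projected volume. A minimal sketch of that volume (the file path and container name are illustrative, not from this run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request", // file the container reads back from the mount
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // must name a container in the same pod
								Resource:      "requests.memory",
							},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}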
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":609,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:47:53.337: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 14:47:53.386: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 11 14:48:26.409: FAIL: failed to wait for definition "com.example.crd-publish-openapi-test-unknown-at-root.v1.E2e-test-crd-publish-openapi-6110-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: Get "https://172.18.0.3:6443/openapi/v2?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers); lastMsg: Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0011ab800, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:48:26.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 11 14:48:38.908: FAIL: All nodes should be ready after test, Get "https://172.18.0.3:6443/api/v1/nodes": http2: client connection lost Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0011ab800, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-8251" for this suite. 
Jan 11 14:48:38.910: FAIL: Couldn't delete ns: "crd-publish-openapi-8251": Delete "https://172.18.0.3:6443/api/v1/namespaces/crd-publish-openapi-8251": EOF (&url.Error{Op:"Delete", URL:"https://172.18.0.3:6443/api/v1/namespaces/crd-publish-openapi-8251", Err:(*errors.errorString)(0xc000118030)}) Full Stack Trace panic(0x499f1e0, 0xc0027f4980) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00007ec60, 0x84, 0x77aef5e, 0x87, 0x71, 0xc0014be1c0, 0x1a1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00007ec60, 0x84, 0xc001fe8c88, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0x4e804aa, 0x28, 0xc001fe8dc8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000ba8f20) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:472 +0x4de k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0011ab800) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0011ab800, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[91m�[1m• Failure [45.574 seconds]�[0m [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23�[0m �[91m�[1mworks for CRD preserving unknown fields at the schema root [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 11 14:48:26.409: failed to wait for definition "com.example.crd-publish-openapi-test-unknown-at-root.v1.E2e-test-crd-publish-openapi-6110-crd" to be served with the right OpenAPI schema: failed to wait for OpenAPI spec validating condition: Get "https://172.18.0.3:6443/openapi/v2?timeout=32s": net/http: request canceled (Client.Timeout exceeded while awaiting headers); lastMsg: �[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":8,"skipped":83,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:47:25.198: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 11 14:47:25.974: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 11 14:47:29.010: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering the webhook via the AdmissionRegistration API Jan 11 14:47:39.044: INFO: Waiting for webhook configuration to be ready... Jan 11 14:47:49.162: INFO: Waiting for webhook configuration to be ready... Jan 11 14:47:59.266: INFO: Waiting for webhook configuration to be ready... Jan 11 14:48:44.349: FAIL: waiting for webhook configuration to be ready Unexpected error: <*url.Error | 0xc003920000>: { Op: "Post", URL: "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers/configmaps", Err: { s: "http2: client connection lost", }, } Post "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers/configmaps": http2: client connection lost occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerWebhook(0xc000f62840, 0xc002bc0780, 0xc, 0xc00332b180, 0x20fb, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:908 +0xd4a k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:194 +0x69 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0023dfe00, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:48:44.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready Jan 11 14:48:54.369: FAIL: All nodes should be ready after test, an error on the server ("") has prevented the request from succeeding (get nodes) Full Stack Trace k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0023dfe00, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: Destroying namespace "webhook-9222" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9222-markers" for this suite. 
Jan 11 14:48:54.374: FAIL: Couldn't delete ns: "webhook-9222": Delete "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222": EOF (&url.Error{Op:"Delete", URL:"https://172.18.0.3:6443/api/v1/namespaces/webhook-9222", Err:(*errors.errorString)(0xc00007e040)}),Couldn't delete ns: "webhook-9222-markers": Delete "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers": EOF (&url.Error{Op:"Delete", URL:"https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers", Err:(*errors.errorString)(0xc00007e040)})
Full Stack Trace
panic(0x499f1e0, 0xc0033b0500) /usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc000126090, 0x8c, 0x77aef5e, 0x87, 0x71, 0xc00304c1c0, 0x1a1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5
panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc000126090, 0x8c, 0xc0030d6c88, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5
k8s.io/kubernetes/test/e2e/framework.Failf(0x4e804aa, 0x28, 0xc0030d6dc8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219
k8s.io/kubernetes/test/e2e/framework.(*Framework).AfterEach(0xc000f62840) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:472 +0x4de
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0023dfe00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0023dfe00, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [89.182 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny pod and configmap creation [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:48:44.349: waiting for webhook configuration to be ready Unexpected error: <*url.Error | 0xc003920000>: { Op: "Post", URL: "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers/configmaps", Err: { s: "http2: client connection lost", }, } Post "https://172.18.0.3:6443/api/v1/namespaces/webhook-9222-markers/configmaps": http2: client connection lost occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:908
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:44:43.058: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-5266
STEP: creating service affinity-nodeport-transition in namespace services-5266
STEP: creating replication controller affinity-nodeport-transition in namespace services-5266
I0111 14:44:43.108701 19 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5266, replica count: 3
I0111 14:44:46.159380 19 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:44:46.168: INFO: Creating new exec pod
Jan 11 14:44:49.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Jan 11 14:44:49.381: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jan 11 14:44:49.381: INFO: stdout: ""
Jan 11 14:44:49.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c nc -zv -t -w 2 10.130.25.194 80'
Jan 11 14:44:49.588: INFO: stderr: "+ nc -zv -t -w 2 10.130.25.194 80\nConnection to 10.130.25.194 80 port [tcp/http] succeeded!\n"
Jan 11 14:44:49.588: INFO: stdout: ""
Jan 11 14:44:49.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31292'
Jan 11 14:44:49.802: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31292\nConnection to 172.18.0.5 31292 port [tcp/31292] succeeded!\n"
Jan 11 14:44:49.802: INFO: stdout: ""
Jan 11 14:44:49.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31292'
Jan 11 14:44:49.998: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 31292\nConnection to 172.18.0.6 31292 port [tcp/31292] succeeded!\n"
Jan 11 14:44:49.998: INFO: stdout: ""
Jan 11 14:44:50.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31292/ ; done'
Jan 11 14:45:40.224: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n"
Jan 11 14:45:40.224: INFO: stdout: "\naffinity-nodeport-transition-kz9lb\n"
Jan 11 14:45:40.224: INFO: Received response from host: affinity-nodeport-transition-kz9lb
Jan 11 14:46:10.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31292/ ; done'
Jan 11 14:47:00.431: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n"
Jan 11 14:47:00.431: INFO: stdout:
"\naffinity-nodeport-transition-kz9lb\n" Jan 11 14:47:00.431: INFO: Received response from host: affinity-nodeport-transition-kz9lb Jan 11 14:47:10.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31292/ ; done' Jan 11 14:48:00.420: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n" Jan 11 14:48:00.420: INFO: stdout: "\naffinity-nodeport-transition-kz9lb\naffinity-nodeport-transition-kz9lb\naffinity-nodeport-transition-kz9lb\n" Jan 11 14:48:00.420: INFO: Received response from host: affinity-nodeport-transition-kz9lb Jan 11 14:48:00.420: INFO: Received response from host: affinity-nodeport-transition-kz9lb Jan 11 14:48:00.420: INFO: Received response from host: affinity-nodeport-transition-kz9lb Jan 11 14:48:00.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5266 exec execpod-affinityk76sj -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31292/ ; done' Jan 11 14:48:50.775: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31292/\n" Jan 11 14:48:50.775: INFO: stdout: "\n" Jan 11 14:48:50.775: INFO: [affinity-nodeport-transition-kz9lb affinity-nodeport-transition-kz9lb affinity-nodeport-transition-kz9lb affinity-nodeport-transition-kz9lb affinity-nodeport-transition-kz9lb] Jan 11 14:48:50.775: FAIL: Connection timed out or not enough responses. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc0034d4c60, 0xc003945800, 0xc004a630a0, 0xa, 0x7a3c, 0x0, 0xc003945800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001087340, 0x56112e0, 0xc0034d4c60, 0xc00352b400, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399 k8s.io/kubernetes/test/e2e/network.glob..func24.30() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2485 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003202300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003202300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003202300, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 Jan 11 14:48:50.776: INFO: Cleaning up the exec pod Jan 11 14:48:50.779: FAIL: failed to delete pod: execpod-affinityk76sj in namespace: services-5266 Unexpected error: <*url.Error | 0xc000d2a420>: { Op: "Delete", URL: "https://172.18.0.3:6443/api/v1/namespaces/services-5266/pods/execpod-affinityk76sj", Err: {s: "EOF"}, } Delete "https://172.18.0.3:6443/api/v1/namespaces/services-5266/pods/execpod-affinityk76sj": EOF occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition.func2(0x56112e0, 0xc0034d4c60, 0xc003e4db70, 0xd, 0xc003945800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3441 +0x249 panic(0x499f1e0, 0xc0020dc100) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc00085c190, 0x42, 0x77361cf, 0x68, 0xca, 0xc003b1c580, 0x514) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc00085c190, 0x42, 0xc0039bee88, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0xc0038d0060, 0x2d, 0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/network.checkAffinityFailed(0xc00183a000, 0x5, 0x8, 0xc0038d0060, 0x2d) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:246 +0xd6 k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc0034d4c60, 0xc003945800, 0xc004a630a0, 0xa, 0x7a3c, 0x0, 0xc003945800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001087340, 0x56112e0, 0xc0034d4c60, 0xc00352b400, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399
k8s.io/kubernetes/test/e2e/network.glob..func24.30() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2485 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003202300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003202300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003202300, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5266, will wait for the garbage collector to delete the pods
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:00.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5266" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• Failure [259.772 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:48:50.775: Connection timed out or not enough responses.
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
------------------------------
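The "Connection timed out or not enough responses." failure above comes out of checkAffinity (service.go:202): the test shells out to the curl loop shown in the log, records one backend hostname per successful response, and passes only once enough responses arrived and they meet the affinity expectation. Here only five responses came back, all from affinity-nodeport-transition-kz9lb, because requests timed out while the control plane was mid-upgrade. A minimal pure-Go sketch of the pass rule (illustrative, not the verbatim Kubernetes helper, which also handles the transition case):

    // affinityHolds reports whether the collected responses demonstrate session
    // affinity: at least minCount responses arrived and all of them name the
    // same backend pod. Too few responses (timeouts) fail the check just like
    // responses from mixed backends would.
    func affinityHolds(hosts []string, minCount int) bool {
        if len(hosts) < minCount {
            return false // "Connection timed out or not enough responses."
        }
        for _, h := range hosts {
            if h == "" || h != hosts[0] {
                return false
            }
        }
        return true
    }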
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":8,"skipped":83,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:48:54.383: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
Jan 11 14:48:54.387: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:56.388: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:58.388: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:49:00.390: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jan 11 14:49:28.609: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:49:28.656: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 11 14:49:30.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045368, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045368, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045368, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045368, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:49:33.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:44.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2514" for this suite.
STEP: Destroying namespace "webhook-2514-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":9,"skipped":83,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
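Context for the webhook retry that finally passed above: after registering the webhook, the e2e framework decides the configuration is "ready" by creating marker objects in the -markers namespace (the Posts to webhook-9222-markers seen earlier) and waiting for the webhook to actually intercept one; each "Waiting for webhook configuration to be ready..." line is one round of that poll. A rough client-go sketch of the idea, with illustrative names and denial check, not the verbatim webhook.go source:

    // Assumes: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    // "k8s.io/apimachinery/pkg/util/wait", "k8s.io/client-go/kubernetes".
    func waitWebhookReady(ctx context.Context, c kubernetes.Interface, markersNS string) error {
        return wait.PollImmediate(10*time.Second, 30*time.Second, func() (bool, error) {
            marker := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{GenerateName: "marker-"}}
            _, err := c.CoreV1().ConfigMaps(markersNS).Create(ctx, marker, metav1.CreateOptions{})
            switch {
            case err == nil:
                return false, nil // webhook not intercepting yet (marker cleanup omitted)
            case strings.Contains(err.Error(), "denied"):
                return true, nil // the webhook rejected the create: configuration is live
            default:
                return false, nil // transport errors like "http2: client connection lost"
            }
        })
    }

In the first attempt the Post itself died at the transport level, so the poll could never observe a denial and the spec failed at webhook.go:908.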
{"msg":"FAILED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":30,"skipped":637,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:48:38.914: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
Jan 11 14:48:38.919: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:40.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:42.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:44.921: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:46.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:48.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:50.921: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:52.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:54.921: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:56.920: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:48:58.921: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
Jan 11 14:49:00.921: INFO: Unexpected error while creating namespace: Post "https://172.18.0.3:6443/api/v1/namespaces": EOF
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:49:27.666: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 11 14:49:33.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-909 --namespace=crd-publish-openapi-909 create -f -'
Jan 11 14:49:35.455: INFO: stderr: ""
Jan 11 14:49:35.455: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 11 14:49:35.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-909 --namespace=crd-publish-openapi-909 delete e2e-test-crd-publish-openapi-7476-crds test-cr'
Jan 11 14:49:35.675: INFO: stderr: ""
Jan 11 14:49:35.675: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 11 14:49:35.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-909 --namespace=crd-publish-openapi-909 apply -f -'
Jan 11 14:49:38.055: INFO: stderr: ""
Jan 11 14:49:38.055: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 11 14:49:38.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-909 --namespace=crd-publish-openapi-909 delete e2e-test-crd-publish-openapi-7476-crds test-cr'
Jan 11 14:49:38.277: INFO: stderr: ""
Jan 11 14:49:38.278: INFO: stdout: "e2e-test-crd-publish-openapi-7476-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 11 14:49:38.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-909 explain e2e-test-crd-publish-openapi-7476-crds'
Jan 11 14:49:38.796: INFO: stderr: ""
Jan 11 14:49:38.796: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7476-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:44.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-909" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":31,"skipped":637,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
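The long run of "Unexpected error while creating namespace: Post ...: EOF" entries above is the framework retrying namespace creation every 2s while the single control-plane endpoint (172.18.0.3:6443; this template runs one control-plane machine) restarts onto v1.20.15; once the apiserver is back the spec proceeds and passes. Any such retry loop needs to separate transport failures from real API rejections; a hedged, illustrative sketch (not the e2e framework's actual helper):

    // Assumes: "errors", "net/url", "strings".
    // isTransportError reports whether a request died before the apiserver could
    // answer (EOF, connection lost), the class of error worth retrying during a
    // control-plane rollout; genuine API rejections arrive as typed status errors.
    func isTransportError(err error) bool {
        if err == nil {
            return false
        }
        var urlErr *url.Error // client-go surfaces `Post "...": EOF` this way
        if errors.As(err, &urlErr) {
            return true
        }
        return strings.Contains(err.Error(), "http2: client connection lost")
    }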
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:44.402: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-map-ae86e328-282e-4be1-8975-ca96cc17f2bb
STEP: Creating a pod to test consume configMaps
Jan 11 14:49:44.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad" in namespace "configmap-2752" to be "Succeeded or Failed"
Jan 11 14:49:44.494: INFO: Pod "pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.891643ms
Jan 11 14:49:46.501: INFO: Pod "pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014184767s
STEP: Saw pod success
Jan 11 14:49:46.501: INFO: Pod "pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad" satisfied condition "Succeeded or Failed"
Jan 11 14:49:46.507: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:49:46.541: INFO: Waiting for pod pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad to disappear
Jan 11 14:49:46.544: INFO: Pod pod-configmaps-ae25efd3-3603-42bc-aabc-d686d12d99ad no longer exists
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:46.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2752" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":640,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
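The ConfigMap volume spec above follows the standard e2e pod-conformance shape: create a pod that mounts the configMap, poll its phase up to 5m0s until it is "Succeeded or Failed", assert success, read the container log, then delete. A compact client-go sketch of the polling step (illustrative, not the framework's actual helper):

    // Assumes: corev1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
    // "k8s.io/apimachinery/pkg/util/wait", "k8s.io/client-go/kubernetes".
    func waitPodSucceeded(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // tolerate transient apiserver blips during the upgrade
            }
            switch pod.Status.Phase {
            case corev1.PodSucceeded:
                return true, nil
            case corev1.PodFailed:
                return false, fmt.Errorf("pod %s/%s failed", ns, name)
            default:
                return false, nil // Pending/Running: keep waiting
            }
        })
    }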
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:46.742: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: validating api versions
Jan 11 14:49:46.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8247 api-versions'
Jan 11 14:49:46.989: INFO: stderr: ""
Jan 11 14:49:46.989: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ncrd-publish-openapi-test-unknown-at-root.example.com/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:46.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8247" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":33,"skipped":719,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:47.063: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-f3b36195-daae-43ad-86e5-dedd94d7173e
STEP: Creating a pod to test consume secrets
Jan 11 14:49:47.122: INFO: Waiting up to 5m0s for pod "pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b" in namespace "secrets-8349" to be "Succeeded or Failed"
Jan 11 14:49:47.130: INFO: Pod "pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.996629ms
Jan 11 14:49:49.138: INFO: Pod "pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015002881s
STEP: Saw pod success
Jan 11 14:49:49.138: INFO: Pod "pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b" satisfied condition "Succeeded or Failed"
Jan 11 14:49:49.142: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b container secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:49:49.185: INFO: Waiting for pod pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b to disappear
Jan 11 14:49:49.194: INFO: Pod pod-secrets-f34e975f-5765-4f1e-8c18-887deb15543b no longer exists
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:49.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8349" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":742,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:49.248: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:53.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8703" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":748,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:44.349: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:55.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6995" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":10,"skipped":117,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
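The ResourceQuota spec above is a status-convergence test: after each action (create the quota, create a ReplicaSet, delete it) the test polls the quota object until status.used reflects the expected usage, including the release back to zero. A minimal client-go sketch of one such check (illustrative resource name and helper, not the e2e source):

    // Assumes the same client-go imports as the sketches above, plus
    // "k8s.io/apimachinery/pkg/api/resource".
    func quotaConverged(ctx context.Context, c kubernetes.Interface, ns, name string, res corev1.ResourceName, want resource.Quantity) error {
        return wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
            q, err := c.CoreV1().ResourceQuotas(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil
            }
            used, ok := q.Status.Used[res] // e.g. res = "count/replicasets.apps"
            return ok && used.Cmp(want) == 0, nil
        })
    }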
[BeforeEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:55.519: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:55.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-1579" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":11,"skipped":121,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
[BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:55.633: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148
[It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 11 14:49:55.713: INFO: starting watch
STEP: patching
STEP: updating
Jan 11 14:49:55.728: INFO: waiting for watch events with expected annotations
Jan 11 14:49:55.728: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:49:55.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-6523" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":12,"skipped":121,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:55.798: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating server pod server in namespace prestop-5571
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-5571
STEP: Deleting pre-stop pod
Jan 11 14:50:04.893: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:04.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-5571" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":13,"skipped":130,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
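The PreStop spec above wires its assertion through the pod spec itself: the tester pod's container carries a preStop hook that calls back into the server pod, so deleting the tester must produce the "Received": {"prestop": 1} payload seen in the log before the container is torn down. Roughly, the relevant stanza (Go structs from k8s.io/api/core/v1; the endpoint and command are illustrative, and newer k8s.io/api releases rename Handler to LifecycleHandler):

    // Container lifecycle with a preStop hook that notifies the server pod.
    lifecycle := &corev1.Lifecycle{
        PreStop: &corev1.Handler{
            Exec: &corev1.ExecAction{
                // Illustrative callback; the real test POSTs to the server pod's HTTP endpoint.
                Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
            },
        },
    }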
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:04.925: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 11 14:50:04.985: INFO: Waiting up to 5m0s for pod "downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09" in namespace "downward-api-1962" to be "Succeeded or Failed"
Jan 11 14:50:04.989: INFO: Pod "downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291819ms
Jan 11 14:50:06.995: INFO: Pod "downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009820779s
STEP: Saw pod success
Jan 11 14:50:06.995: INFO: Pod "downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09" satisfied condition "Succeeded or Failed"
Jan 11 14:50:07.000: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09 container dapi-container: <nil>
STEP: delete the pod
Jan 11 14:50:07.021: INFO: Waiting for pod downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09 to disappear
Jan 11 14:50:07.028: INFO: Pod downward-api-cd5f2827-d600-4b71-87fd-6185de7f3e09 no longer exists
[AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:07.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1962" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":130,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
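The Downward API spec above surfaces the container's own limits and requests as environment variables via resourceFieldRef, then asserts the values inside the container match the pod spec. The env stanza looks roughly like this (Go structs from k8s.io/api/core/v1; the variable name is illustrative):

    // One env var per resource field; limits.memory, requests.cpu and
    // requests.memory follow the same pattern.
    env := []corev1.EnvVar{{
        Name: "CPU_LIMIT",
        ValueFrom: &corev1.EnvVarSource{
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "dapi-container",
                Resource:      "limits.cpu",
            },
        },
    }}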
[BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:07.079: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:09.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7805" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":146,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:09.173: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating all guestbook components
Jan 11 14:50:09.213: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend
Jan 11 14:50:09.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:11.054: INFO: stderr: ""
Jan 11 14:50:11.054: INFO: stdout: "service/agnhost-replica created\n"
Jan 11 14:50:11.054: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend
Jan 11 14:50:11.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:11.677: INFO: stderr: ""
Jan 11 14:50:11.678: INFO: stdout: "service/agnhost-primary created\n"
Jan 11 14:50:11.678: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend
Jan 11 14:50:11.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:12.197: INFO: stderr: ""
Jan 11 14:50:12.197: INFO: stdout: "service/frontend created\n"
Jan 11 14:50:12.198: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80
Jan 11 14:50:12.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:12.782: INFO: stderr: ""
Jan 11 14:50:12.782: INFO: stdout: "deployment.apps/frontend created\n"
Jan 11 14:50:12.782: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379
Jan 11 14:50:12.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:13.934: INFO: stderr: ""
Jan 11 14:50:13.935: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 11 14:50:13.935: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379
Jan 11 14:50:13.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 create -f -'
Jan 11 14:50:15.773: INFO: stderr: ""
Jan 11 14:50:15.773: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 11 14:50:15.773: INFO: Waiting for all frontend pods to be Running.
Jan 11 14:50:15.823: INFO: Waiting for frontend to serve content.
Jan 11 14:50:16.882: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response:
Jan 11 14:50:21.894: INFO: Trying to add a new entry to the guestbook.
Jan 11 14:50:21.917: INFO: Verifying that added entry can be retrieved.
Jan 11 14:50:21.930: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""}
STEP: using delete to clean up resources
Jan 11 14:50:26.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:27.152: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:27.152: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 11 14:50:27.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:27.400: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:27.400: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 11 14:50:27.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:27.611: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:27.611: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 11 14:50:27.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:27.794: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:27.794: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 11 14:50:27.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:28.061: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:28.062: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 11 14:50:28.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-685 delete --grace-period=0 --force -f -'
Jan 11 14:50:28.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:50:28.358: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:28.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-685" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":16,"skipped":149,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
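The guestbook validation above is tolerant by design: it keeps polling until an entry can be written and read back, so the transient 417 from the service proxy and the empty {"data":""} read do not fail the spec as long as a later attempt succeeds. A generic poll-until-content sketch in plain Go (the direct HTTP call is illustrative; the real test goes through the apiserver's service proxy):

    // Assumes: "fmt", "io", "net/http", "strings", "time".
    func pollForContent(url, want string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := http.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if strings.Contains(string(body), want) {
                    return nil // the guestbook entry came back
                }
            }
            time.Sleep(5 * time.Second) // mirrors the ~5s spacing of attempts above
        }
        return fmt.Errorf("content %q not served within %v", want, timeout)
    }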
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:28.387: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jan 11 14:50:28.487: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:34.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8304" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":17,"skipped":152,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:34.222: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:50:34.270: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:35.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-1686" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":18,"skipped":153,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":18,"skipped":153,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:35.634: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85
[It] deployment should support proportional scaling [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:50:35.691: INFO: Creating deployment "webserver-deployment"
Jan 11 14:50:35.695: INFO: Waiting for observed generation 1
Jan 11 14:50:37.731: INFO: Waiting for all required pods to come up
Jan 11 14:50:37.762: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 11 14:50:39.793: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 11 14:50:39.802: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 11 14:50:39.816: INFO: Updating deployment webserver-deployment
Jan 11 14:50:39.816: INFO: Waiting for observed generation 2
Jan 11 14:50:41.825: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 11 14:50:41.828: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 11 14:50:41.833: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 11 14:50:41.845: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 11 14:50:41.845: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 11 14:50:41.848: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 11 14:50:41.856: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 11 14:50:41.856: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 11 14:50:41.866: INFO: Updating deployment webserver-deployment
Jan 11 14:50:41.866: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 11 14:50:41.874: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 11 14:50:41.881: INFO: Verifying that second rollout's
replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 14:50:41.903: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6204 4e1db50c-4941-4b04-bfe6-44cd9d2abbd8 8071 3 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:50:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc000116918 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-795d758f88" is progressing.,LastUpdateTime:2023-01-11 14:50:39 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-11 14:50:41 +0000 UTC,LastTransitionTime:2023-01-11 14:50:41 
+0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 11 14:50:41.939: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88 deployment-6204 13627682-869d-4bc3-bff5-7198c29d2ca1 8065 3 2023-01-11 14:50:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4e1db50c-4941-4b04-bfe6-44cd9d2abbd8 0xc00222d157 0xc00222d158}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:50:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1db50c-4941-4b04-bfe6-44cd9d2abbd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00222d1d8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:50:41.939: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 11 14:50:41.939: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-dd94f59b7 deployment-6204 00ea8d15-6448-469a-a086-86d1001aabf0 8062 3 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4e1db50c-4941-4b04-bfe6-44cd9d2abbd8 0xc00222d237 0xc00222d238}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:50:37 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4e1db50c-4941-4b04-bfe6-44cd9d2abbd8\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: dd94f59b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00222d2a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:50:41.962: INFO: Pod "webserver-deployment-795d758f88-2rd52" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-2rd52 webserver-deployment-795d758f88- deployment-6204 930e979f-2920-485c-bfb6-821af4e0a937 8090 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc003acd480 0xc003acd481}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSCla
ss:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.963: INFO: Pod "webserver-deployment-795d758f88-9jzcj" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-9jzcj webserver-deployment-795d758f88- deployment-6204 c0ed3e88-4367-46ba-b96c-f8d5674d5bc8 8091 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc003acd727 0xc003acd728}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.
kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.964: INFO: Pod "webserver-deployment-795d758f88-c882s" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-c882s webserver-deployment-795d758f88- deployment-6204 b45ca57f-1c8a-46c1-af7a-5fb6b074721e 8049 0 2023-01-11 14:50:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc003acd930 0xc003acd931}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.29\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.29,StartTime:2023-01-11 14:50:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.964: INFO: Pod "webserver-deployment-795d758f88-htm6g" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-htm6g webserver-deployment-795d758f88- deployment-6204 6d9d4f98-e00e-43d4-9147-e6ee96a140c9 8092 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc003acdc70 0xc003acdc71}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-cvzb96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,Las
tProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.964: INFO: Pod "webserver-deployment-795d758f88-j45k4" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-j45k4 webserver-deployment-795d758f88- deployment-6204 8022b40d-a5c3-4229-9a54-5808acd6bccf 8073 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc003acdf80 0xc003acdf81}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,
},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.964: INFO: Pod "webserver-deployment-795d758f88-jkm6x" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-jkm6x webserver-deployment-795d758f88- deployment-6204 fc16536b-27b6-4442-a8d0-1dfdee997e3c 8087 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc0005144f0 0xc0005144f1}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityConte
xt{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.966: INFO: Pod "webserver-deployment-795d758f88-kb7vk" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-kb7vk webserver-deployment-795d758f88- deployment-6204 df035efa-9a13-47e7-80c1-c79d6bd2e50d 8060 0 2023-01-11 14:50:39 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc000514a67 0xc000514a68}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:39 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.22\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.22,StartTime:2023-01-11 14:50:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.22,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.966: INFO: Pod "webserver-deployment-795d758f88-m68j6" is not available: &Pod{ObjectMeta:{webserver-deployment-795d758f88-m68j6 webserver-deployment-795d758f88- deployment-6204 9683a2d7-cb5b-45b8-9d5d-462790c941c8 8086 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 13627682-869d-4bc3-bff5-7198c29d2ca1 0xc000515670 0xc000515671}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"13627682-869d-4bc3-bff5-7198c29d2ca1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSCla
ss:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 11 14:50:41.967: INFO: Pod "webserver-deployment-795d758f88-nkwx4" is not available: Pending on node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 (HostIP 172.18.0.6, PodIP 192.168.3.41); container "httpd" (image webserver:404) is waiting with reason ErrImagePull: failed to pull and unpack image "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization.
Jan 11 14:50:41.968: INFO: Pod "webserver-deployment-795d758f88-p5lt6" is not available: Pending on node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz (HostIP 172.18.0.7, no PodIP yet); container "httpd" (image webserver:404) is waiting with reason ContainerCreating.
Jan 11 14:50:41.969: INFO: Pod "webserver-deployment-795d758f88-v26wp" is not available: Pending on node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys (HostIP 172.18.0.5, PodIP 192.168.6.44); container "httpd" (image webserver:404) is waiting with reason ErrImagePull: failed to pull and unpack image "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization.
Jan 11 14:50:41.969: INFO: Pod "webserver-deployment-795d758f88-x4n24" is not available: Pending; created 2023-01-11 14:50:41 UTC and not yet scheduled to a node (no conditions, no StartTime).
Jan 11 14:50:41.970: INFO: Pod "webserver-deployment-dd94f59b7-79s5l" is not available: Pending on node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz (HostIP 172.18.0.7, no PodIP yet); container "httpd" (image docker.io/library/httpd:2.4.38-alpine) is waiting with reason ContainerCreating.
Jan 11 14:50:41.970: INFO: Pod "webserver-deployment-dd94f59b7-dqj2z" is not available: Pending; scheduled to node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz at 2023-01-11 14:50:41 UTC but not yet started (no HostIP, no container statuses).
Jan 11 14:50:41.970: INFO: Pod "webserver-deployment-dd94f59b7-g942s" is not available: Pending; created 2023-01-11 14:50:41 UTC and not yet scheduled to a node.
Jan 11 14:50:41.971: INFO: Pod "webserver-deployment-dd94f59b7-hmdv5" is not available: Pending; created 2023-01-11 14:50:41 UTC and not yet scheduled to a node.
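For reference, "available" in these messages tracks the pod's Ready condition; with a minReadySeconds of 0 (an assumption, the Deployment spec is not shown in this log) a Ready pod counts as available immediately. A simplified sketch of that check, not the exact helper the conformance suite uses:

import corev1 "k8s.io/api/core/v1"

// podAvailable reports whether a pod counts as "available" in the sense used
// in the log lines above: its Ready condition is True. Assumes the
// Deployment's minReadySeconds is 0, so readiness implies availability.
func podAvailable(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}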
Jan 11 14:50:41.971: INFO: Pod "webserver-deployment-dd94f59b7-qdflp" is available: Running on node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv (HostIP 172.18.0.4, PodIP 192.168.0.20); container "httpd" (image docker.io/library/httpd:2.4.38-alpine) running since 2023-01-11 14:50:37 UTC, Ready.
Jan 11 14:50:41.971: INFO: Pod "webserver-deployment-dd94f59b7-rxfqh" is available: Running on node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 (HostIP 172.18.0.6, PodIP 192.168.3.39); container "httpd" (image docker.io/library/httpd:2.4.38-alpine) running since 2023-01-11 14:50:37 UTC, Ready.
Jan 11 14:50:41.972: INFO: Pod "webserver-deployment-dd94f59b7-s68qf" is not available: Pending; created 2023-01-11 14:50:41 UTC and not yet scheduled to a node.
Jan 11 14:50:41.972: INFO: Pod "webserver-deployment-dd94f59b7-sghmm" is available: Running on node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 (HostIP 172.18.0.6, PodIP 192.168.3.40); container "httpd" (image docker.io/library/httpd:2.4.38-alpine) running since 2023-01-11 14:50:37 UTC, Ready.
Jan 11 14:50:41.972: INFO: Pod "webserver-deployment-dd94f59b7-vnsqb" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vnsqb webserver-deployment-dd94f59b7- deployment-6204 91ea8e18-d028-4dd9-b234-6f7bb498243a 8080 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000aad4e0 0xc000aad4e1}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.972: INFO: Pod "webserver-deployment-dd94f59b7-vqbrm" is not available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-vqbrm webserver-deployment-dd94f59b7- deployment-6204 3238f1d7-104b-4c28-a516-f595cd2d648b 8078 0 2023-01-11 14:50:41 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000aad8f7 0xc000aad8f8}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:41 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-cvzb96,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,To
lerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.973: INFO: Pod "webserver-deployment-dd94f59b7-wjqsk" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wjqsk webserver-deployment-dd94f59b7- deployment-6204 168a6a49-3c23-4dd9-92f2-0e9af64cdbaa 7949 0 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000aadac0 0xc000aadac1}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.21\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.21,StartTime:2023-01-11 14:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b52286a8b96eefc172442270c3530ca5e4c6a82c35b5e3eb75c5c326b257b57f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.21,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.973: INFO: Pod "webserver-deployment-dd94f59b7-wrkhm" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wrkhm webserver-deployment-dd94f59b7- deployment-6204 504b1666-4a48-429b-8ee5-19705202b45b 7932 0 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000aadda0 0xc000aadda1}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.42,StartTime:2023-01-11 14:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e98058f31f3a7a9758d6de6e858d4ee2ebc5c303e4236bd726a8918434f487c2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.973: INFO: Pod "webserver-deployment-dd94f59b7-wwch5" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-wwch5 webserver-deployment-dd94f59b7- deployment-6204 71808860-cfa2-42f1-84af-515e9ca1b745 7926 0 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000988020 0xc000988021}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 
+0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.1.28,StartTime:2023-01-11 14:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e3e42081826cc6a6129e3b656ad87d1ea9e444984f3a2f605f024411d92092e3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.974: INFO: Pod "webserver-deployment-dd94f59b7-zpzc8" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zpzc8 webserver-deployment-dd94f59b7- deployment-6204 6e5621f4-0f8a-4b78-9976-a0c79d569747 7935 0 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000988280 0xc000988281}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.43,StartTime:2023-01-11 14:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ca73a8cb6f6b902dbde82c248afbce7afb3cebc20ec1a3f98a916f7775e4bb2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:50:41.975: INFO: Pod "webserver-deployment-dd94f59b7-zw2b9" is available: &Pod{ObjectMeta:{webserver-deployment-dd94f59b7-zw2b9 webserver-deployment-dd94f59b7- deployment-6204 16e5aaa0-7c34-42e6-baa1-d3161fb8cca3 7928 0 2023-01-11 14:50:35 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:dd94f59b7] map[] [{apps/v1 ReplicaSet webserver-deployment-dd94f59b7 00ea8d15-6448-469a-a086-86d1001aabf0 0xc000988430 0xc000988431}] [] [{kube-controller-manager Update v1 2023-01-11 14:50:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"00ea8d15-6448-469a-a086-86d1001aabf0\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:50:38 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.41\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zglj7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zglj7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zglj7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:50:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.41,StartTime:2023-01-11 14:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7decd5f020b8993e8b0dcec5cf0d8b20a321e0f19c43fb61eec1accf47631e73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.41,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:41.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6204" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":19,"skipped":189,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:42.157: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating projection with secret that has name projected-secret-test-04627399-7716-48da-9238-dee1ea101e2a
STEP: Creating a pod to test consume secrets
Jan 11 14:50:42.239: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3" in namespace "projected-2317" to be "Succeeded or Failed"
Jan 11 14:50:42.248: INFO: Pod "pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500496ms
Jan 11 14:50:44.259: INFO: Pod "pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019070624s
Jan 11 14:50:46.269: INFO: Pod "pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029201387s
STEP: Saw pod success
Jan 11 14:50:46.269: INFO: Pod "pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3" satisfied condition "Succeeded or Failed"
Jan 11 14:50:46.274: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 11 14:50:46.326: INFO: Waiting for pod pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3 to disappear
Jan 11 14:50:46.330: INFO: Pod pod-projected-secrets-be4e70ac-0538-417b-9a1c-fdac3f5d88e3 no longer exists
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2317" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":211,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:53.339: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:50:53.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4920" for this suite.
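Note: the 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' and "Elapsed:" lines above come from the e2e framework's standard pod-phase poll. A minimal Go sketch of that pattern, assuming client-go and the kubeconfig at /tmp/kubeconfig; the pod name is a placeholder and this is an illustration, not the framework's actual helper:

    // podwait.go: illustrative sketch of the "Succeeded or Failed" poll.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPodSucceededOrFailed polls the pod's phase every 2s for up to
    // 5m, logging elapsed time the way the INFO lines above do.
    func waitForPodSucceededOrFailed(c kubernetes.Interface, ns, name string) error {
    	start := time.Now()
    	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
    		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		fmt.Printf("Pod %q: Phase=%q. Elapsed: %s\n", name, pod.Status.Phase, time.Since(start))
    		switch pod.Status.Phase {
    		case v1.PodSucceeded: // the "Saw pod success" case
    			return true, nil
    		case v1.PodFailed:
    			return true, fmt.Errorf("pod %s/%s failed", ns, name)
    		}
    		return false, nil // Pending or Running: keep polling
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	c := kubernetes.NewForConfigOrDie(cfg)
    	// Placeholder pod name; the test generates a random suffix.
    	if err := waitForPodSucceededOrFailed(c, "projected-2317", "pod-projected-secrets-example"); err != nil {
    		panic(err)
    	}
    }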
•
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":751,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:53.459: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:50:53.491: INFO: Creating ReplicaSet my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f
Jan 11 14:50:53.497: INFO: Pod name my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f: Found 0 pods out of 1
Jan 11 14:50:58.504: INFO: Pod name my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f: Found 1 pods out of 1
Jan 11 14:50:58.504: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f" is running
Jan 11 14:50:58.506: INFO: Pod "my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f-lccdg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 14:50:53 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 14:50:54 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 14:50:54 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-11 14:50:53 +0000 UTC Reason: Message:}])
Jan 11 14:50:58.507: INFO: Trying to dial the pod
Jan 11 14:51:03.516: INFO: Controller my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f: Got expected result from replica 1 [my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f-lccdg]: "my-hostname-basic-5f6f2634-d903-4401-9d13-53d70fcb3c2f-lccdg", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:03.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6791" for this suite.
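Note: "Trying to dial the pod" / "Got expected result from replica 1" above reflect the check that each replica, running the serve-hostname image, answers with its own pod name. A rough sketch of one such check through the API server's pod proxy, reusing the clientset from the earlier sketch plus the "strings" import (hypothetical helper; the real test's response checker differs in detail):

    // checkReplicaServesHostname GETs the pod's proxy subresource
    // (/api/v1/namespaces/{ns}/pods/{name}/proxy) and expects the
    // response body to contain the pod's own name.
    func checkReplicaServesHostname(c kubernetes.Interface, ns, podName string) error {
    	body, err := c.CoreV1().RESTClient().Get().
    		Namespace(ns).
    		Resource("pods").
    		SubResource("proxy").
    		Name(podName).
    		Do(context.TODO()).
    		Raw()
    	if err != nil {
    		return err
    	}
    	if !strings.Contains(string(body), podName) {
    		return fmt.Errorf("replica %s answered %q, want its own hostname", podName, string(body))
    	}
    	return nil
    }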
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":37,"skipped":788,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:03.526: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 11 14:51:03.560: INFO: Waiting up to 5m0s for pod "downward-api-bceb7a16-551e-466b-9d14-15354301e960" in namespace "downward-api-9920" to be "Succeeded or Failed"
Jan 11 14:51:03.563: INFO: Pod "downward-api-bceb7a16-551e-466b-9d14-15354301e960": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702626ms
Jan 11 14:51:05.566: INFO: Pod "downward-api-bceb7a16-551e-466b-9d14-15354301e960": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006204964s
STEP: Saw pod success
Jan 11 14:51:05.566: INFO: Pod "downward-api-bceb7a16-551e-466b-9d14-15354301e960" satisfied condition "Succeeded or Failed"
Jan 11 14:51:05.569: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod downward-api-bceb7a16-551e-466b-9d14-15354301e960 container dapi-container: <nil>
STEP: delete the pod
Jan 11 14:51:05.583: INFO: Waiting for pod downward-api-bceb7a16-551e-466b-9d14-15354301e960 to disappear
Jan 11 14:51:05.585: INFO: Pod downward-api-bceb7a16-551e-466b-9d14-15354301e960 no longer exists
[AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:05.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9920" for this suite.
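Note: the "downward api env vars" pod above exposes limits.cpu and limits.memory to its container as environment variables via resourceFieldRef; because the test container declares no resource limits of its own, the kubelet substitutes the node's allocatable values, which is exactly what this spec verifies. An illustrative Go equivalent of that pod, using the same client-go types as the sketches above (name and image are placeholders, not the test's actual values):

    // buildDownwardAPIPod builds a pod whose env vars are sourced from
    // resourceFieldRef limits; with no limits set on the container, the
    // values default to node allocatable.
    func buildDownwardAPIPod() *v1.Pod {
    	return &v1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
    		Spec: v1.PodSpec{
    			RestartPolicy: v1.RestartPolicyNever,
    			Containers: []v1.Container{{
    				Name:    "dapi-container",
    				Image:   "busybox:1.29", // placeholder image
    				Command: []string{"sh", "-c", "env"},
    				Env: []v1.EnvVar{
    					{Name: "CPU_LIMIT", ValueFrom: &v1.EnvVarSource{
    						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.cpu"},
    					}},
    					{Name: "MEMORY_LIMIT", ValueFrom: &v1.EnvVarSource{
    						ResourceFieldRef: &v1.ResourceFieldSelector{Resource: "limits.memory"},
    					}},
    				},
    			}},
    		},
    	}
    }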
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":788,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:50:46.357: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a replication controller
Jan 11 14:50:46.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 create -f -'
Jan 11 14:50:47.850: INFO: stderr: ""
Jan 11 14:50:47.850: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 14:50:47.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:50:48.110: INFO: stderr: ""
Jan 11 14:50:48.110: INFO: stdout: "update-demo-nautilus-9cjvf update-demo-nautilus-xqb4w "
Jan 11 14:50:48.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-9cjvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:50:48.501: INFO: stderr: ""
Jan 11 14:50:48.501: INFO: stdout: ""
Jan 11 14:50:48.501: INFO: update-demo-nautilus-9cjvf is created but not running
Jan 11 14:50:53.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:50:53.619: INFO: stderr: ""
Jan 11 14:50:53.619: INFO: stdout: "update-demo-nautilus-9cjvf update-demo-nautilus-xqb4w "
Jan 11 14:50:53.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-9cjvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:50:53.712: INFO: stderr: ""
Jan 11 14:50:53.712: INFO: stdout: "true"
Jan 11 14:50:53.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-9cjvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 11 14:50:53.810: INFO: stderr: ""
Jan 11 14:50:53.810: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 14:50:53.810: INFO: validating pod update-demo-nautilus-9cjvf
Jan 11 14:50:53.814: INFO: got data: { "image": "nautilus.jpg" }
Jan 11 14:50:53.814: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 14:50:53.814: INFO: update-demo-nautilus-9cjvf is verified up and running
Jan 11 14:50:53.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:50:53.915: INFO: stderr: ""
Jan 11 14:50:53.915: INFO: stdout: "true"
Jan 11 14:50:53.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 11 14:50:54.016: INFO: stderr: ""
Jan 11 14:50:54.016: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 14:50:54.016: INFO: validating pod update-demo-nautilus-xqb4w
Jan 11 14:50:54.022: INFO: got data: { "image": "nautilus.jpg" }
Jan 11 14:50:54.022: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 14:50:54.022: INFO: update-demo-nautilus-xqb4w is verified up and running
STEP: scaling down the replication controller
Jan 11 14:50:54.026: INFO: scanned /root for discovery docs: <nil>
Jan 11 14:50:54.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 scale rc update-demo-nautilus --replicas=1 --timeout=5m'
Jan 11 14:50:55.146: INFO: stderr: ""
Jan 11 14:50:55.146: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 14:50:55.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:50:55.238: INFO: stderr: ""
Jan 11 14:50:55.239: INFO: stdout: "update-demo-nautilus-9cjvf update-demo-nautilus-xqb4w "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 11 14:51:00.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:51:00.335: INFO: stderr: ""
Jan 11 14:51:00.335: INFO: stdout: "update-demo-nautilus-xqb4w "
Jan 11 14:51:00.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:51:00.427: INFO: stderr: ""
Jan 11 14:51:00.427: INFO: stdout: "true"
Jan 11 14:51:00.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 11 14:51:00.518: INFO: stderr: ""
Jan 11 14:51:00.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 14:51:00.518: INFO: validating pod update-demo-nautilus-xqb4w
Jan 11 14:51:00.521: INFO: got data: { "image": "nautilus.jpg" }
Jan 11 14:51:00.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 14:51:00.521: INFO: update-demo-nautilus-xqb4w is verified up and running
STEP: scaling up the replication controller
Jan 11 14:51:00.522: INFO: scanned /root for discovery docs: <nil>
Jan 11 14:51:00.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
Jan 11 14:51:01.631: INFO: stderr: ""
Jan 11 14:51:01.631: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 11 14:51:01.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:51:01.728: INFO: stderr: ""
Jan 11 14:51:01.728: INFO: stdout: "update-demo-nautilus-sxbss update-demo-nautilus-xqb4w "
Jan 11 14:51:01.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-sxbss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:51:01.835: INFO: stderr: ""
Jan 11 14:51:01.835: INFO: stdout: ""
Jan 11 14:51:01.835: INFO: update-demo-nautilus-sxbss is created but not running
Jan 11 14:51:06.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 11 14:51:06.960: INFO: stderr: ""
Jan 11 14:51:06.960: INFO: stdout: "update-demo-nautilus-sxbss update-demo-nautilus-xqb4w "
Jan 11 14:51:06.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-sxbss -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:51:07.071: INFO: stderr: ""
Jan 11 14:51:07.071: INFO: stdout: "true"
Jan 11 14:51:07.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-sxbss -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 11 14:51:07.167: INFO: stderr: ""
Jan 11 14:51:07.167: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 14:51:07.167: INFO: validating pod update-demo-nautilus-sxbss
Jan 11 14:51:07.172: INFO: got data: { "image": "nautilus.jpg" }
Jan 11 14:51:07.172: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 14:51:07.172: INFO: update-demo-nautilus-sxbss is verified up and running
Jan 11 14:51:07.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 11 14:51:07.278: INFO: stderr: ""
Jan 11 14:51:07.278: INFO: stdout: "true"
Jan 11 14:51:07.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods update-demo-nautilus-xqb4w -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 11 14:51:07.385: INFO: stderr: ""
Jan 11 14:51:07.385: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 11 14:51:07.385: INFO: validating pod update-demo-nautilus-xqb4w
Jan 11 14:51:07.389: INFO: got data: { "image": "nautilus.jpg" }
Jan 11 14:51:07.390: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 11 14:51:07.390: INFO: update-demo-nautilus-xqb4w is verified up and running
STEP: using delete to clean up resources
Jan 11 14:51:07.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 delete --grace-period=0 --force -f -'
Jan 11 14:51:07.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 11 14:51:07.502: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 11 14:51:07.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get rc,svc -l name=update-demo --no-headers'
Jan 11 14:51:07.631: INFO: stderr: "No resources found in kubectl-9064 namespace.\n"
Jan 11 14:51:07.631: INFO: stdout: ""
Jan 11 14:51:07.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 11 14:51:07.751: INFO: stderr: ""
Jan 11 14:51:07.751: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:07.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9064" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":21,"skipped":216,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:05.633: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:51:06.113: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:51:09.134: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:51:09.138: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3037-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:10.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6246" for this suite.
STEP: Destroying namespace "webhook-6246-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":39,"skipped":817,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":379,"failed":0}
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:45:04.347: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-6057
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating stateful set ss in namespace statefulset-6057
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6057
Jan 11 14:45:04.400: INFO: Found 0 stateful pods, waiting for 1
Jan 11 14:45:14.404: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jan 11 14:45:14.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 14:45:14.586: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 14:45:14.586: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 14:45:14.586: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 14:45:14.589: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 11 14:45:24.593: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 14:45:24.593: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 14:45:24.605: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 11 14:45:24.605: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:14 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }]
Jan 11 14:45:24.605: INFO:
Jan 11 14:45:24.605: INFO: StatefulSet ss has not reached scale 3, at 1
Jan 11 14:45:25.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996897751s
Jan 11 14:45:26.612: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99340603s
Jan 11 14:45:27.616: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989948643s
Jan 11 14:45:28.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.985957797s
Jan 11 14:45:29.623: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.982594195s
Jan 11 14:45:30.628: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.978579525s
Jan 11 14:45:31.632: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.973748686s
Jan 11 14:45:32.636: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.969810742s
Jan 11 14:45:33.640: INFO: Verifying statefulset ss doesn't scale past 3 for another 965.856386ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6057
Jan 11 14:45:34.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:45:34.808: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 11 14:45:34.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 14:45:34.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 11 14:45:34.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:45:34.970: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 11 14:45:34.970: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 14:45:34.970: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 11 14:45:34.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:45:35.137: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Jan 11 14:45:35.137: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 11 14:45:35.137: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 11 14:45:35.141: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Jan 11 14:45:45.145: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 14:45:45.145: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 11 14:45:45.145: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Jan 11 14:45:45.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 14:45:45.319: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 14:45:45.319: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 14:45:45.319: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 14:45:45.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 14:45:45.496: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 14:45:45.496: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 14:45:45.496: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 14:45:45.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 14:45:45.719: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 14:45:45.719: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 14:45:45.719: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 14:45:45.719: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 14:45:45.722: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Jan 11 14:45:55.728: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 14:45:55.728: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 14:45:55.728: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 14:45:55.738: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 11 14:45:55.738: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }]
Jan 11 14:45:55.738: INFO: ss-1 k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }]
Jan 11 14:45:55.738: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }]
2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:55.738: INFO: Jan 11 14:45:55.738: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 14:45:56.741: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:45:56.741: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:45:56.741: INFO: ss-1 k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:56.741: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:56.741: INFO: Jan 11 14:45:56.741: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 14:45:57.746: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:45:57.746: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:45:57.746: INFO: ss-1 k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:57.746: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:57.746: INFO: Jan 11 14:45:57.746: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 14:45:58.751: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:45:58.751: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:45:58.751: INFO: ss-1 k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:58.751: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:58.751: INFO: Jan 11 14:45:58.751: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 14:45:59.754: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:45:59.755: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:45:59.755: INFO: ss-1 k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:59.755: INFO: ss-2 
k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:45:59.755: INFO: Jan 11 14:45:59.755: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 11 14:46:00.759: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:46:00.759: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:46:00.759: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:46:00.759: INFO: Jan 11 14:46:00.759: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 11 14:46:01.763: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:46:01.763: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }] Jan 11 14:46:01.763: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }] Jan 11 14:46:01.763: INFO: Jan 11 14:46:01.763: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 11 14:46:02.767: INFO: POD NODE PHASE GRACE CONDITIONS Jan 11 14:46:02.767: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady 
Jan 11 14:46:02.767: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }]
Jan 11 14:46:02.767: INFO:
Jan 11 14:46:02.767: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 14:46:03.771: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 11 14:46:03.771: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }]
Jan 11 14:46:03.771: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }]
Jan 11 14:46:03.771: INFO:
Jan 11 14:46:03.771: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 11 14:46:04.775: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 11 14:46:04.775: INFO: ss-0 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:45 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:04 +0000 UTC }]
Jan 11 14:46:04.775: INFO: ss-2 k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:46 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:45:24 +0000 UTC }]
Jan 11 14:46:04.775: INFO:
Jan 11 14:46:04.775: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-6057
Jan 11 14:46:05.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:05.899: INFO: rc: 1
Jan 11 14:46:05.899: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
Jan 11 14:46:15.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:15.988: INFO: rc: 1
Jan 11 14:46:15.988: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:46:25.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:26.080: INFO: rc: 1
Jan 11 14:46:26.080: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:46:36.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:36.178: INFO: rc: 1
Jan 11 14:46:36.178: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:46:46.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:46.269: INFO: rc: 1
Jan 11 14:46:46.269: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:46:56.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:46:56.353: INFO: rc: 1
Jan 11 14:46:56.353: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:06.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:06.459: INFO: rc: 1
Jan 11 14:47:06.459: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:16.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:16.631: INFO: rc: 1
Jan 11 14:47:16.631: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:26.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:26.856: INFO: rc: 1
Jan 11 14:47:26.856: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:36.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:37.022: INFO: rc: 1
Jan 11 14:47:37.023: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:47.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:47.245: INFO: rc: 1
Jan 11 14:47:47.245: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:47:57.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:47:57.437: INFO: rc: 1
Jan 11 14:47:57.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:48:07.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:48:17.593: INFO: rc: 1
Jan 11 14:48:17.593: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Unable to connect to the server: net/http: TLS handshake timeout error: exit status 1
Jan 11 14:48:27.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:48:37.765: INFO: rc: 1
Jan 11 14:48:37.765: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get pods ss-0) error: exit status 1
Jan 11 14:48:47.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:48:57.932: INFO: rc: 1
Jan 11 14:48:57.932: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (InternalError): an error on the server ("") has prevented the request from succeeding (get pods ss-0) error: exit status 1
Jan 11 14:49:07.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:08.102: INFO: rc: 1
Jan 11 14:49:08.102: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:49:18.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:18.278: INFO: rc: 1
Jan 11 14:49:18.278: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:49:28.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:28.620: INFO: rc: 1
Jan 11 14:49:28.621: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:49:38.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:38.839: INFO: rc: 1
Jan 11 14:49:38.839: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:49:48.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:49.010: INFO: rc: 1
Jan 11 14:49:49.010: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:49:59.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:49:59.199: INFO: rc: 1
Jan 11 14:49:59.199: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:50:09.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:50:09.389: INFO: rc: 1
Jan 11 14:50:09.389: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:50:19.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:50:19.578: INFO: rc: 1
Jan 11 14:50:19.578: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:50:29.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:50:29.771: INFO: rc: 1
Jan 11 14:50:29.771: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:50:39.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:50:40.025: INFO: rc: 1
Jan 11 14:50:40.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:50:50.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:50:50.143: INFO: rc: 1
Jan 11 14:50:50.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:51:00.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:51:00.255: INFO: rc: 1
Jan 11 14:51:00.255: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Jan 11 14:51:10.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6057 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 11 14:51:10.491: INFO: rc: 1
Jan 11 14:51:10.492: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Jan 11 14:51:10.492: INFO: Scaling statefulset ss to 0
Jan 11 14:51:10.516: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 11 14:51:10.524: INFO: Deleting all statefulset in ns statefulset-6057
Jan 11 14:51:10.530: INFO: Scaling statefulset ss to 0
Jan 11 14:51:10.541: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 14:51:10.545: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:10.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6057" for this suite.
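The "unhealthy" pods in the StatefulSet test above are manufactured on purpose: moving the httpd document root's index.html aside makes the webserver container's readiness probe start failing, so the pod stays Running but goes Ready=false; moving the file back heals it. A hand-run equivalent, reusing names from this log (a sketch; it assumes the pods still exist and serve from /usr/local/apache2/htdocs):

    # Break readiness: the probe's GET starts failing once index.html is gone.
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6057 exec ss-0 -- \
      /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/'

    # The pod should now report Ready=False while staying in phase Running.
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6057 get pod ss-0 \
      -o jsonpath='{.status.phase} {.status.conditions[?(@.type=="Ready")].status}'

    # Heal it again.
    kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-6057 exec ss-0 -- \
      /bin/sh -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/'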
• [SLOW TEST:366.221 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":15,"skipped":379,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:10.589: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:51:10.665: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef" in namespace "downward-api-2021" to be "Succeeded or Failed"
Jan 11 14:51:10.675: INFO: Pod "downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.474294ms
Jan 11 14:51:12.680: INFO: Pod "downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01566746s
STEP: Saw pod success
Jan 11 14:51:12.681: INFO: Pod "downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef" satisfied condition "Succeeded or Failed"
Jan 11 14:51:12.685: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef container client-container: <nil>
STEP: delete the pod
Jan 11 14:51:12.703: INFO: Waiting for pod downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef to disappear
Jan 11 14:51:12.706: INFO: Pod downwardapi-volume-f7344acd-d37a-47cb-ba20-0fa41bfe2eef no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:12.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2021" for this suite.
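The Downward API case above amounts to a pod that mounts its own CPU limit as a file and reads it once. A minimal stand-alone equivalent (a sketch; the pod name, namespace, busybox image, and 500m limit are illustrative, not taken from this log):

    kubectl apply -n default -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-limit-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "500m"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m             # report the limit in millicores
    EOF
    # After the pod reaches Succeeded, its log should read "500".
    kubectl -n default logs cpu-limit-demo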
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":385,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:10.349: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8979.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8979.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8979.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8979.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8979.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8979.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 14:51:18.512: INFO: DNS probes using dns-8979/dns-test-01ff1ec6-f498-40c7-8f34-e95be9af5e71 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:18.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8979" for this suite.
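The awk pipeline in the DNS probes above only rewrites the pod's IP into the dashed pod A-record form <a-b-c-d>.<namespace>.pod.cluster.local. The same record can be built and resolved by hand (a sketch; the IP is illustrative, and it assumes dig is available in the probing pod):

    ip=192.168.1.10                                # e.g. from: hostname -i
    rec="$(echo "$ip" | tr '.' '-').dns-8979.pod.cluster.local"
    echo "$rec"                                    # 192-168-1-10.dns-8979.pod.cluster.local
    dig +notcp +noall +answer +search "$rec" A     # UDP lookup, as in the probe
    dig +tcp +noall +answer +search "$rec" A       # TCP variant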
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":40,"skipped":827,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:18.644: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:19.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-388" for this suite.
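The listing and field-selection steps above map onto ordinary events queries (a sketch; the selector values are illustrative, since the log does not record the ones the test used):

    # List events everywhere, then only in the test namespace.
    kubectl --kubeconfig=/tmp/kubeconfig get events --all-namespaces
    kubectl --kubeconfig=/tmp/kubeconfig -n events-388 get events

    # Field selectors filter server-side, as in the test's filtering steps.
    kubectl --kubeconfig=/tmp/kubeconfig -n events-388 get events \
      --field-selector reason=Scheduled
    kubectl --kubeconfig=/tmp/kubeconfig -n events-388 get events \
      --field-selector involvedObject.name=some-pod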
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:12.804: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 11 14:51:15.379: INFO: Successfully updated pod "adopt-release-c8k68"
STEP: Checking that the Job readopts the Pod
Jan 11 14:51:15.380: INFO: Waiting up to 15m0s for pod "adopt-release-c8k68" in namespace "job-91" to be "adopted"
Jan 11 14:51:15.385: INFO: Pod "adopt-release-c8k68": Phase="Running", Reason="", readiness=true. Elapsed: 4.974246ms
Jan 11 14:51:17.388: INFO: Pod "adopt-release-c8k68": Phase="Running", Reason="", readiness=true. Elapsed: 2.00868227s
Jan 11 14:51:17.388: INFO: Pod "adopt-release-c8k68" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 11 14:51:17.904: INFO: Successfully updated pod "adopt-release-c8k68"
STEP: Checking that the Job releases the Pod
Jan 11 14:51:17.904: INFO: Waiting up to 15m0s for pod "adopt-release-c8k68" in namespace "job-91" to be "released"
Jan 11 14:51:17.908: INFO: Pod "adopt-release-c8k68": Phase="Running", Reason="", readiness=true. Elapsed: 4.38262ms
Jan 11 14:51:19.912: INFO: Pod "adopt-release-c8k68": Phase="Running", Reason="", readiness=true. Elapsed: 2.007928533s
Jan 11 14:51:19.912: INFO: Pod "adopt-release-c8k68" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:19.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-91" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":17,"skipped":431,"failed":0}
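A minimal sketch of the orphan-and-readopt dance this spec performs, assuming /tmp/kubeconfig; the namespace and pod name below are the ephemeral ones from this run's log, so treat them as placeholders for a pod currently owned by a running Job.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns, podName := context.TODO(), "job-91", "adopt-release-c8k68"
	// Orphan the pod: a JSON merge patch with null deletes ownerReferences.
	orphan := []byte(`{"metadata":{"ownerReferences":null}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, podName, types.MergePatchType, orphan, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	// Poll until the Job controller sets a controller ownerReference again,
	// which is the "adopted" condition the framework waits for above.
	err = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return metav1.GetControllerOf(pod) != nil, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod re-adopted")
}
```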
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:19.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:51:20.351: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:51:23.375: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 11 14:51:23.394: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:23.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4124" for this suite.
STEP: Destroying namespace "webhook-4124-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":18,"skipped":431,"failed":0}
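A minimal sketch of the kind of ValidatingWebhookConfiguration the spec registers, intercepting CREATE of CustomResourceDefinitions and routing it to a denying webhook service. This is not the suite's fixture: the configuration name, webhook name, service namespace/name/path, and CABundle below are invented, and the CABundle must be the CA that signed the webhook's serving certificate for the sketch to work.

```go
package main

import (
	"context"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fail := admv1.Fail
	none := admv1.SideEffectClassNone
	path := "/crd"
	port := int32(443)
	hook := &admv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-sketch"},
		Webhooks: []admv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"v1"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			ClientConfig: admv1.WebhookClientConfig{
				// Placeholder service reference; the e2e suite points this at
				// its sample-webhook Deployment's Service.
				Service:  &admv1.ServiceReference{Namespace: "webhook-ns", Name: "e2e-test-webhook", Path: &path, Port: &port},
				CABundle: []byte("<serving-cert-ca>"), // placeholder, not a real CA
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```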
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:19.268: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:51:19.349: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 11 14:51:21.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4042 --namespace=crd-publish-openapi-4042 create -f -'
Jan 11 14:51:22.759: INFO: stderr: ""
Jan 11 14:51:22.759: INFO: stdout: "e2e-test-crd-publish-openapi-3074-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 11 14:51:22.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4042 --namespace=crd-publish-openapi-4042 delete e2e-test-crd-publish-openapi-3074-crds test-cr'
Jan 11 14:51:22.854: INFO: stderr: ""
Jan 11 14:51:22.854: INFO: stdout: "e2e-test-crd-publish-openapi-3074-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 11 14:51:22.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4042 --namespace=crd-publish-openapi-4042 apply -f -'
Jan 11 14:51:23.079: INFO: stderr: ""
Jan 11 14:51:23.079: INFO: stdout: "e2e-test-crd-publish-openapi-3074-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 11 14:51:23.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4042 --namespace=crd-publish-openapi-4042 delete e2e-test-crd-publish-openapi-3074-crds test-cr'
Jan 11 14:51:23.168: INFO: stderr: ""
Jan 11 14:51:23.168: INFO: stdout: "e2e-test-crd-publish-openapi-3074-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 11 14:51:23.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4042 explain e2e-test-crd-publish-openapi-3074-crds'
Jan 11 14:51:23.375: INFO: stderr: ""
Jan 11 14:51:23.375: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3074-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:25.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4042" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":42,"skipped":909,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:25.753: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:51:25.784: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2" in namespace "projected-8463" to be "Succeeded or Failed"
Jan 11 14:51:25.787: INFO: Pod "downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217515ms
Jan 11 14:51:27.790: INFO: Pod "downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00609498s
STEP: Saw pod success
Jan 11 14:51:27.790: INFO: Pod "downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2" satisfied condition "Succeeded or Failed"
Jan 11 14:51:27.793: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2 container client-container: <nil>
STEP: delete the pod
Jan 11 14:51:27.805: INFO: Waiting for pod downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2 to disappear
Jan 11 14:51:27.809: INFO: Pod downwardapi-volume-df99d94e-0e3c-4cf2-a4f1-68293e85c8c2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:27.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8463" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":916,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:27.883: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name projected-configmap-test-volume-246b4df1-8e67-4cef-bfda-6dcf33144a6d
STEP: Creating a pod to test consume configMaps
Jan 11 14:51:27.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b" in namespace "projected-4196" to be "Succeeded or Failed"
Jan 11 14:51:27.919: INFO: Pod "pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.340864ms
Jan 11 14:51:29.923: INFO: Pod "pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006084372s
STEP: Saw pod success
Jan 11 14:51:29.923: INFO: Pod "pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b" satisfied condition "Succeeded or Failed"
Jan 11 14:51:29.925: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:51:29.939: INFO: Waiting for pod pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b to disappear
Jan 11 14:51:29.943: INFO: Pod pod-projected-configmaps-f6b42c25-e9df-4539-b92d-8b172610182b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:29.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4196" for this suite.
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:23.568: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating Agnhost RC
Jan 11 14:51:23.597: INFO: namespace kubectl-7141
Jan 11 14:51:23.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7141 create -f -'
Jan 11 14:51:23.856: INFO: stderr: ""
Jan 11 14:51:23.856: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 11 14:51:24.860: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:51:24.860: INFO: Found 0 / 1
Jan 11 14:51:25.859: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:51:25.859: INFO: Found 1 / 1
Jan 11 14:51:25.859: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 11 14:51:25.862: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 11 14:51:25.862: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 11 14:51:25.862: INFO: wait on agnhost-primary startup in kubectl-7141
Jan 11 14:51:25.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7141 logs agnhost-primary-h67w2 agnhost-primary'
Jan 11 14:51:25.956: INFO: stderr: ""
Jan 11 14:51:25.956: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 11 14:51:25.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7141 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 11 14:51:26.076: INFO: stderr: ""
Jan 11 14:51:26.076: INFO: stdout: "service/rm2 exposed\n"
Jan 11 14:51:26.082: INFO: Service rm2 in namespace kubectl-7141 found.
STEP: exposing service
Jan 11 14:51:28.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7141 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 11 14:51:28.196: INFO: stderr: ""
Jan 11 14:51:28.196: INFO: stdout: "service/rm3 exposed\n"
Jan 11 14:51:28.202: INFO: Service rm3 in namespace kubectl-7141 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:30.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7141" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":19,"skipped":465,"failed":0}
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":966,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:29.952: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating the pod
Jan 11 14:51:29.984: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:33.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4617" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":45,"skipped":966,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
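A minimal sketch of the pod shape this spec submits, not the suite's fixture: on a RestartNever pod, each init container must run to completion exactly once before the app container starts. The pod name, images, and commands are illustrative.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-sketch", Namespace: "default"},
		Spec: corev1.PodSpec{
			// RestartNever: init containers run once, in order, to completion.
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init-1", Image: "busybox", Command: []string{"true"}},
				{Name: "init-2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"echo", "done"}},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```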
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:33.328: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-2189/secret-test-0196e57b-42c3-4e4e-935b-fbb195aada66
STEP: Creating a pod to test consume secrets
Jan 11 14:51:33.370: INFO: Waiting up to 5m0s for pod "pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e" in namespace "secrets-2189" to be "Succeeded or Failed"
Jan 11 14:51:33.373: INFO: Pod "pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.462508ms
Jan 11 14:51:35.376: INFO: Pod "pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006484095s
STEP: Saw pod success
Jan 11 14:51:35.376: INFO: Pod "pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e" satisfied condition "Succeeded or Failed"
Jan 11 14:51:35.379: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e container env-test: <nil>
STEP: delete the pod
Jan 11 14:51:35.401: INFO: Waiting for pod pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e to disappear
Jan 11 14:51:35.404: INFO: Pod pod-configmaps-936b5b18-5353-47ca-b7d0-e23e1209462e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:35.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2189" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":1022,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
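A minimal sketch of the secret-as-environment-variable pattern this spec validates, assuming /tmp/kubeconfig; the secret name, key, and pod name are invented.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, ns := context.TODO(), "default"
	sec := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "env-secret", Namespace: ns},
		StringData: map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().Secrets(ns).Create(ctx, sec, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// The container's env var is populated from the secret key; the command
	// just echoes it so the value shows up in the pod log.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "env-test", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "env-secret"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```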
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:35.494: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 11 14:51:35.639: INFO: Waiting up to 5m0s for pod "pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd" in namespace "emptydir-2052" to be "Succeeded or Failed"
Jan 11 14:51:35.643: INFO: Pod "pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.506126ms
Jan 11 14:51:37.646: INFO: Pod "pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007221466s
STEP: Saw pod success
Jan 11 14:51:37.646: INFO: Pod "pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd" satisfied condition "Succeeded or Failed"
Jan 11 14:51:37.649: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd container test-container: <nil>
STEP: delete the pod
Jan 11 14:51:37.668: INFO: Waiting for pod pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd to disappear
Jan 11 14:51:37.670: INFO: Pod pod-2bf4e34a-bd77-4bda-af65-830bd7841dbd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:37.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2052" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1061,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]}
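A minimal sketch of a tmpfs-backed emptyDir mount like the one this spec checks, not the suite's agnhost mounttest fixture: the pod name is invented and the busybox mount/stat commands only approximate the suite's mode assertion.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-sketch", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Medium: Memory backs the emptyDir with tmpfs instead of
				// node-local disk.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Print the mount entry and the octal mode of the mount point.
				Command:      []string{"sh", "-c", "mount | grep /test-volume && stat -c %a /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```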
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:07.838: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1710 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1710;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1710 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1710;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1710.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1710.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1710.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1710.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.113.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.113.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.113.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.113.17_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1710 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1710;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1710 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1710;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1710.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1710.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1710.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1710.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1710.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1710.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1710.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 17.113.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.113.17_udp@PTR;check="$$(dig +tcp +noall +answer +search 17.113.143.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.143.113.17_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 14:51:15.925: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.928: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.932: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.936: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.942: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.952: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.955: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.980: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.983: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.987: INFO: Unable to read jessie_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.993: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:15.997: INFO: Unable to read jessie_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:16.001: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:16.004: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:16.007: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:16.030: INFO: Lookups using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1710 wheezy_tcp@dns-test-service.dns-1710 wheezy_udp@dns-test-service.dns-1710.svc wheezy_tcp@dns-test-service.dns-1710.svc wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1710 jessie_tcp@dns-test-service.dns-1710 jessie_udp@dns-test-service.dns-1710.svc jessie_tcp@dns-test-service.dns-1710.svc jessie_udp@_http._tcp.dns-test-service.dns-1710.svc jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc]
Jan 11 14:51:21.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.040: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.044: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.047: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.050: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.053: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.057: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.060: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.096: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.101: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.104: INFO: Unable to read jessie_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.110: INFO: Unable to read jessie_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.113: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.116: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.119: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:21.136: INFO: Lookups using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1710 wheezy_tcp@dns-test-service.dns-1710 wheezy_udp@dns-test-service.dns-1710.svc wheezy_tcp@dns-test-service.dns-1710.svc wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1710 jessie_tcp@dns-test-service.dns-1710 jessie_udp@dns-test-service.dns-1710.svc jessie_tcp@dns-test-service.dns-1710.svc jessie_udp@_http._tcp.dns-test-service.dns-1710.svc jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc]
Jan 11 14:51:26.033: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.038: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.041: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.045: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.049: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.052: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.055: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.057: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.096: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.099: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.104: INFO: Unable to read jessie_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.114: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.120: INFO: Unable to read jessie_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.126: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.130: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:26.162: INFO: Lookups using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1710 wheezy_tcp@dns-test-service.dns-1710 wheezy_udp@dns-test-service.dns-1710.svc wheezy_tcp@dns-test-service.dns-1710.svc wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1710 jessie_tcp@dns-test-service.dns-1710 jessie_udp@dns-test-service.dns-1710.svc jessie_tcp@dns-test-service.dns-1710.svc jessie_udp@_http._tcp.dns-test-service.dns-1710.svc jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc]
Jan 11 14:51:31.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.039: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.042: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.045: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.048: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.051: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.054: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.058: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.080: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.083: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.086: INFO: Unable to read jessie_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.093: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.096: INFO: Unable to read jessie_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.099: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.102: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.105: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:31.124: INFO: Lookups using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1710 wheezy_tcp@dns-test-service.dns-1710 wheezy_udp@dns-test-service.dns-1710.svc wheezy_tcp@dns-test-service.dns-1710.svc wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1710 jessie_tcp@dns-test-service.dns-1710 jessie_udp@dns-test-service.dns-1710.svc jessie_tcp@dns-test-service.dns-1710.svc jessie_udp@_http._tcp.dns-test-service.dns-1710.svc jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc]
Jan 11 14:51:36.034: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.037: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.040: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.043: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.045: INFO: Unable to read wheezy_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.048: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.050: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.053: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.080: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.083: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.086: INFO: Unable to read jessie_udp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.090: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710 from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.093: INFO: Unable to read jessie_udp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.096: INFO: Unable to read jessie_tcp@dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.099: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.102: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc from pod dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75: the server could not find the requested resource (get pods dns-test-69569b21-5386-4844-84de-6a0312db8e75)
Jan 11 14:51:36.133: INFO: Lookups using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1710 wheezy_tcp@dns-test-service.dns-1710 wheezy_udp@dns-test-service.dns-1710.svc wheezy_tcp@dns-test-service.dns-1710.svc wheezy_udp@_http._tcp.dns-test-service.dns-1710.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1710.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1710 jessie_tcp@dns-test-service.dns-1710 jessie_udp@dns-test-service.dns-1710.svc jessie_tcp@dns-test-service.dns-1710.svc jessie_udp@_http._tcp.dns-test-service.dns-1710.svc jessie_tcp@_http._tcp.dns-test-service.dns-1710.svc]
Jan 11 14:51:41.130: INFO: DNS probes using dns-1710/dns-test-69569b21-5386-4844-84de-6a0312db8e75 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:51:41.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1710" for this suite.
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":273,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
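The partial names in this spec ("dns-test-service", "dns-test-service.dns-1710", and so on) resolve only because the pod's /etc/resolv.conf search list expands them, which is what the `dig +search` probes above exercise. A minimal sketch of the same idea, which must run inside a cluster pod; "dns-test-service" is a placeholder for a Service in the pod's own namespace.

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// For a name without a trailing dot, the resolver applies the resolv.conf
	// search domains, so inside a pod "dns-test-service" expands to
	// dns-test-service.<namespace>.svc.cluster.local.
	addrs, err := net.LookupHost("dns-test-service")
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved:", addrs)
}
```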
• ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":273,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:30.222: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 14:51:30.248: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: creating replication controller svc-latency-rc in namespace svc-latency-1149 I0111 14:51:30.262983 16 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1149, replica count: 1 I0111 14:51:31.313353 16 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0111 14:51:32.313599 16 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 14:51:32.424: INFO: Created: latency-svc-dsln2 Jan 11 14:51:32.439: INFO: Got endpoints: latency-svc-dsln2 [25.8201ms] Jan 11 14:51:32.459: INFO: Created: latency-svc-n6l6p Jan 11 14:51:32.467: INFO: Got endpoints: latency-svc-n6l6p [27.305905ms] Jan 11 14:51:32.472: INFO: Created: latency-svc-lnmjq Jan 11 14:51:32.477: INFO: Got endpoints: latency-svc-lnmjq [36.905944ms] Jan 11 14:51:32.490: INFO: Created: latency-svc-j2lwc Jan 11 14:51:32.503: INFO: Created: latency-svc-bhs8l Jan 11 14:51:32.504: INFO: Got endpoints: latency-svc-j2lwc [63.509822ms] Jan 11 14:51:32.513: INFO: Got endpoints: latency-svc-bhs8l [72.149135ms] Jan 11 14:51:32.526: INFO: Created: latency-svc-prnr4 Jan 11 14:51:32.544: INFO: Got endpoints: latency-svc-prnr4 [103.085421ms] Jan 11 14:51:32.552: INFO: Created: latency-svc-lvd5q Jan 11 14:51:32.555: INFO: Got endpoints: latency-svc-lvd5q [114.189932ms] Jan 11 14:51:32.567: INFO: Created: latency-svc-25cbj Jan 11 14:51:32.574: INFO: Got endpoints: latency-svc-25cbj [133.227379ms] Jan 11 14:51:32.588: INFO: Created: latency-svc-sl4zz Jan 11 14:51:32.601: INFO: Got endpoints: latency-svc-sl4zz [161.654801ms] Jan 11 14:51:32.610: INFO: Created: latency-svc-9p9lv Jan 11 14:51:32.614: INFO: Got endpoints: latency-svc-9p9lv [172.550146ms] Jan 11 14:51:32.622: INFO: Created: latency-svc-hjgv5 Jan 11 14:51:32.629: INFO: Got endpoints: latency-svc-hjgv5 [187.655742ms] Jan 11 14:51:32.632: INFO: Created: latency-svc-bdkxq Jan 11 14:51:32.634: INFO: Got endpoints: latency-svc-bdkxq [20.229089ms] Jan 11 14:51:32.642: INFO: Created: latency-svc-v82mn Jan 11 14:51:32.651: INFO: Got endpoints: latency-svc-v82mn [210.128121ms] Jan 11 14:51:32.653: INFO: Created: latency-svc-k44tv
Jan 11 14:51:32.661: INFO: Created: latency-svc-rkd4t Jan 11 14:51:32.664: INFO: Got endpoints: latency-svc-k44tv [222.516714ms] Jan 11 14:51:32.667: INFO: Got endpoints: latency-svc-rkd4t [225.079662ms] Jan 11 14:51:32.674: INFO: Created: latency-svc-wmblv Jan 11 14:51:32.683: INFO: Got endpoints: latency-svc-wmblv [241.534666ms] Jan 11 14:51:32.684: INFO: Created: latency-svc-c4wbs Jan 11 14:51:32.693: INFO: Got endpoints: latency-svc-c4wbs [253.789864ms] Jan 11 14:51:32.695: INFO: Created: latency-svc-l8vsz Jan 11 14:51:32.704: INFO: Got endpoints: latency-svc-l8vsz [236.725325ms] Jan 11 14:51:32.713: INFO: Created: latency-svc-l5ffm Jan 11 14:51:32.720: INFO: Got endpoints: latency-svc-l5ffm [243.315227ms] Jan 11 14:51:32.724: INFO: Created: latency-svc-j8t8z Jan 11 14:51:32.727: INFO: Got endpoints: latency-svc-j8t8z [223.609146ms] Jan 11 14:51:32.734: INFO: Created: latency-svc-sdpwc Jan 11 14:51:32.739: INFO: Got endpoints: latency-svc-sdpwc [226.343592ms] Jan 11 14:51:32.746: INFO: Created: latency-svc-bpzql Jan 11 14:51:32.748: INFO: Got endpoints: latency-svc-bpzql [204.331305ms] Jan 11 14:51:32.760: INFO: Created: latency-svc-7pcbj Jan 11 14:51:32.764: INFO: Got endpoints: latency-svc-7pcbj [208.647679ms] Jan 11 14:51:32.769: INFO: Created: latency-svc-cc7xd Jan 11 14:51:32.775: INFO: Got endpoints: latency-svc-cc7xd [201.115946ms] Jan 11 14:51:32.776: INFO: Created: latency-svc-jnknc Jan 11 14:51:32.781: INFO: Got endpoints: latency-svc-jnknc [180.384988ms] Jan 11 14:51:32.789: INFO: Created: latency-svc-mg4cb Jan 11 14:51:32.795: INFO: Created: latency-svc-ttx64 Jan 11 14:51:32.795: INFO: Got endpoints: latency-svc-mg4cb [166.520218ms] Jan 11 14:51:32.800: INFO: Got endpoints: latency-svc-ttx64 [166.106328ms] Jan 11 14:51:32.807: INFO: Created: latency-svc-b7h8k Jan 11 14:51:32.811: INFO: Got endpoints: latency-svc-b7h8k [159.671398ms] Jan 11 14:51:32.819: INFO: Created: latency-svc-r5kx2 Jan 11 14:51:32.825: INFO: Got endpoints: latency-svc-r5kx2 [160.648824ms] Jan 11 14:51:32.839: INFO: Created: latency-svc-smskm Jan 11 14:51:32.850: INFO: Got endpoints: latency-svc-smskm [182.802416ms] Jan 11 14:51:32.886: INFO: Created: latency-svc-hwlpr Jan 11 14:51:32.891: INFO: Created: latency-svc-kbwgp Jan 11 14:51:32.896: INFO: Got endpoints: latency-svc-hwlpr [213.905665ms] Jan 11 14:51:32.899: INFO: Got endpoints: latency-svc-kbwgp [205.294003ms] Jan 11 14:51:32.904: INFO: Created: latency-svc-gsflz Jan 11 14:51:32.910: INFO: Got endpoints: latency-svc-gsflz [206.200958ms] Jan 11 14:51:32.912: INFO: Created: latency-svc-cwppf Jan 11 14:51:32.915: INFO: Got endpoints: latency-svc-cwppf [194.820886ms] Jan 11 14:51:32.924: INFO: Created: latency-svc-2f58m Jan 11 14:51:32.930: INFO: Got endpoints: latency-svc-2f58m [202.457719ms] Jan 11 14:51:32.933: INFO: Created: latency-svc-zxrvf Jan 11 14:51:32.938: INFO: Got endpoints: latency-svc-zxrvf [198.660074ms] Jan 11 14:51:32.939: INFO: Created: latency-svc-ft7rm Jan 11 14:51:32.947: INFO: Got endpoints: latency-svc-ft7rm [198.519406ms] Jan 11 14:51:32.949: INFO: Created: latency-svc-hftjj Jan 11 14:51:32.957: INFO: Got endpoints: latency-svc-hftjj [192.874377ms] Jan 11 14:51:32.960: INFO: Created: latency-svc-4v9mc Jan 11 14:51:32.967: INFO: Got endpoints: latency-svc-4v9mc [192.218454ms] Jan 11 14:51:32.968: INFO: Created: latency-svc-vh89c Jan 11 14:51:32.975: INFO: Got endpoints: latency-svc-vh89c [193.630731ms] Jan 11 14:51:32.978: INFO: Created: latency-svc-sh89h Jan 11 14:51:32.995: INFO: Got endpoints: latency-svc-sh89h 
[199.05593ms] Jan 11 14:51:32.997: INFO: Created: latency-svc-8vxxx Jan 11 14:51:33.004: INFO: Created: latency-svc-t2knf Jan 11 14:51:33.015: INFO: Created: latency-svc-c27tw Jan 11 14:51:33.026: INFO: Created: latency-svc-gjph8 Jan 11 14:51:33.037: INFO: Got endpoints: latency-svc-8vxxx [236.869782ms] Jan 11 14:51:33.039: INFO: Created: latency-svc-zzs9t Jan 11 14:51:33.050: INFO: Created: latency-svc-76j88 Jan 11 14:51:33.055: INFO: Created: latency-svc-9rqjt Jan 11 14:51:33.062: INFO: Created: latency-svc-g298m Jan 11 14:51:33.071: INFO: Created: latency-svc-g6sqr Jan 11 14:51:33.079: INFO: Created: latency-svc-9lqs8 Jan 11 14:51:33.086: INFO: Got endpoints: latency-svc-t2knf [275.214381ms] Jan 11 14:51:33.087: INFO: Created: latency-svc-779pd Jan 11 14:51:33.093: INFO: Created: latency-svc-nmbv4 Jan 11 14:51:33.113: INFO: Created: latency-svc-md5gx Jan 11 14:51:33.117: INFO: Created: latency-svc-59mv9 Jan 11 14:51:33.126: INFO: Created: latency-svc-8qwzz Jan 11 14:51:33.133: INFO: Created: latency-svc-26ss2 Jan 11 14:51:33.135: INFO: Got endpoints: latency-svc-c27tw [309.88746ms] Jan 11 14:51:33.143: INFO: Created: latency-svc-sc7bc Jan 11 14:51:33.150: INFO: Created: latency-svc-9wmh9 Jan 11 14:51:33.187: INFO: Got endpoints: latency-svc-gjph8 [337.350625ms] Jan 11 14:51:33.220: INFO: Created: latency-svc-f4v2p Jan 11 14:51:33.237: INFO: Got endpoints: latency-svc-zzs9t [340.096254ms] Jan 11 14:51:33.257: INFO: Created: latency-svc-sbckz Jan 11 14:51:33.286: INFO: Got endpoints: latency-svc-76j88 [386.893755ms] Jan 11 14:51:33.301: INFO: Created: latency-svc-dqzzb Jan 11 14:51:33.335: INFO: Got endpoints: latency-svc-9rqjt [424.279889ms] Jan 11 14:51:33.348: INFO: Created: latency-svc-fztrz Jan 11 14:51:33.383: INFO: Got endpoints: latency-svc-g298m [467.652213ms] Jan 11 14:51:33.393: INFO: Created: latency-svc-kb9dx Jan 11 14:51:33.434: INFO: Got endpoints: latency-svc-g6sqr [503.741725ms] Jan 11 14:51:33.448: INFO: Created: latency-svc-fwnnr Jan 11 14:51:33.490: INFO: Got endpoints: latency-svc-9lqs8 [552.629306ms] Jan 11 14:51:33.514: INFO: Created: latency-svc-vxhvx Jan 11 14:51:33.537: INFO: Got endpoints: latency-svc-779pd [590.001061ms] Jan 11 14:51:33.558: INFO: Created: latency-svc-nwl4p Jan 11 14:51:33.584: INFO: Got endpoints: latency-svc-nmbv4 [627.621902ms] Jan 11 14:51:33.597: INFO: Created: latency-svc-hbx9k Jan 11 14:51:33.640: INFO: Got endpoints: latency-svc-md5gx [672.62542ms] Jan 11 14:51:33.651: INFO: Created: latency-svc-t42fx Jan 11 14:51:33.685: INFO: Got endpoints: latency-svc-59mv9 [709.989978ms] Jan 11 14:51:33.696: INFO: Created: latency-svc-zdtbl Jan 11 14:51:33.735: INFO: Got endpoints: latency-svc-8qwzz [739.769631ms] Jan 11 14:51:33.749: INFO: Created: latency-svc-7rg9n Jan 11 14:51:33.784: INFO: Got endpoints: latency-svc-26ss2 [745.959687ms] Jan 11 14:51:33.796: INFO: Created: latency-svc-txrtt Jan 11 14:51:33.835: INFO: Got endpoints: latency-svc-sc7bc [748.966091ms] Jan 11 14:51:33.863: INFO: Created: latency-svc-97nt8 Jan 11 14:51:33.885: INFO: Got endpoints: latency-svc-9wmh9 [750.042204ms] Jan 11 14:51:33.896: INFO: Created: latency-svc-tbttv Jan 11 14:51:33.934: INFO: Got endpoints: latency-svc-f4v2p [746.930204ms] Jan 11 14:51:33.947: INFO: Created: latency-svc-wb7ms Jan 11 14:51:33.984: INFO: Got endpoints: latency-svc-sbckz [747.122434ms] Jan 11 14:51:33.996: INFO: Created: latency-svc-6jr8d Jan 11 14:51:34.034: INFO: Got endpoints: latency-svc-dqzzb [748.082986ms] Jan 11 14:51:34.047: INFO: Created: latency-svc-m4vcj Jan 11 14:51:34.084: 
INFO: Got endpoints: latency-svc-fztrz [749.512826ms] Jan 11 14:51:34.095: INFO: Created: latency-svc-dbp6t Jan 11 14:51:34.135: INFO: Got endpoints: latency-svc-kb9dx [752.052061ms] Jan 11 14:51:34.145: INFO: Created: latency-svc-wc6pn Jan 11 14:51:34.186: INFO: Got endpoints: latency-svc-fwnnr [752.434442ms] Jan 11 14:51:34.196: INFO: Created: latency-svc-qth6q Jan 11 14:51:34.234: INFO: Got endpoints: latency-svc-vxhvx [743.420508ms] Jan 11 14:51:34.244: INFO: Created: latency-svc-l2jtp Jan 11 14:51:34.286: INFO: Got endpoints: latency-svc-nwl4p [749.325057ms] Jan 11 14:51:34.296: INFO: Created: latency-svc-lrmsr Jan 11 14:51:34.333: INFO: Got endpoints: latency-svc-hbx9k [748.785061ms] Jan 11 14:51:34.345: INFO: Created: latency-svc-t6q4n Jan 11 14:51:34.383: INFO: Got endpoints: latency-svc-t42fx [743.2119ms] Jan 11 14:51:34.397: INFO: Created: latency-svc-pb2bs Jan 11 14:51:34.433: INFO: Got endpoints: latency-svc-zdtbl [748.162075ms] Jan 11 14:51:34.455: INFO: Created: latency-svc-fq7s5 Jan 11 14:51:34.485: INFO: Got endpoints: latency-svc-7rg9n [750.062829ms] Jan 11 14:51:34.521: INFO: Created: latency-svc-9vtfg Jan 11 14:51:34.537: INFO: Got endpoints: latency-svc-txrtt [753.108472ms] Jan 11 14:51:34.553: INFO: Created: latency-svc-b2kcz Jan 11 14:51:34.587: INFO: Got endpoints: latency-svc-97nt8 [751.330381ms] Jan 11 14:51:34.610: INFO: Created: latency-svc-c6frh Jan 11 14:51:34.634: INFO: Got endpoints: latency-svc-tbttv [749.076764ms] Jan 11 14:51:34.647: INFO: Created: latency-svc-fq94w Jan 11 14:51:34.684: INFO: Got endpoints: latency-svc-wb7ms [749.225528ms] Jan 11 14:51:34.701: INFO: Created: latency-svc-hkrt5 Jan 11 14:51:34.734: INFO: Got endpoints: latency-svc-6jr8d [749.87202ms] Jan 11 14:51:34.746: INFO: Created: latency-svc-48cwp Jan 11 14:51:34.784: INFO: Got endpoints: latency-svc-m4vcj [749.633788ms] Jan 11 14:51:34.800: INFO: Created: latency-svc-s4z9d Jan 11 14:51:34.833: INFO: Got endpoints: latency-svc-dbp6t [748.804534ms] Jan 11 14:51:34.848: INFO: Created: latency-svc-xz7jc Jan 11 14:51:34.886: INFO: Got endpoints: latency-svc-wc6pn [750.492356ms] Jan 11 14:51:34.900: INFO: Created: latency-svc-qkqrg Jan 11 14:51:34.935: INFO: Got endpoints: latency-svc-qth6q [748.75858ms] Jan 11 14:51:34.946: INFO: Created: latency-svc-9tc98 Jan 11 14:51:34.983: INFO: Got endpoints: latency-svc-l2jtp [749.587934ms] Jan 11 14:51:34.999: INFO: Created: latency-svc-2lk77 Jan 11 14:51:35.035: INFO: Got endpoints: latency-svc-lrmsr [748.717952ms] Jan 11 14:51:35.051: INFO: Created: latency-svc-bx4v9 Jan 11 14:51:35.084: INFO: Got endpoints: latency-svc-t6q4n [750.411035ms] Jan 11 14:51:35.097: INFO: Created: latency-svc-z9dn5 Jan 11 14:51:35.134: INFO: Got endpoints: latency-svc-pb2bs [749.996774ms] Jan 11 14:51:35.151: INFO: Created: latency-svc-kffnt Jan 11 14:51:35.186: INFO: Got endpoints: latency-svc-fq7s5 [752.678613ms] Jan 11 14:51:35.202: INFO: Created: latency-svc-chxz2 Jan 11 14:51:35.233: INFO: Got endpoints: latency-svc-9vtfg [748.579874ms] Jan 11 14:51:35.247: INFO: Created: latency-svc-rswxj Jan 11 14:51:35.286: INFO: Got endpoints: latency-svc-b2kcz [749.010211ms] Jan 11 14:51:35.303: INFO: Created: latency-svc-l5wks Jan 11 14:51:35.336: INFO: Got endpoints: latency-svc-c6frh [748.921944ms] Jan 11 14:51:35.352: INFO: Created: latency-svc-7llnl Jan 11 14:51:35.383: INFO: Got endpoints: latency-svc-fq94w [748.949667ms] Jan 11 14:51:35.399: INFO: Created: latency-svc-8l7vp Jan 11 14:51:35.439: INFO: Got endpoints: latency-svc-hkrt5 [755.272549ms] Jan 11 14:51:35.471: 
INFO: Created: latency-svc-qbmxd Jan 11 14:51:35.493: INFO: Got endpoints: latency-svc-48cwp [759.207847ms] Jan 11 14:51:35.540: INFO: Got endpoints: latency-svc-s4z9d [756.418758ms] Jan 11 14:51:35.587: INFO: Created: latency-svc-8xxmc Jan 11 14:51:35.609: INFO: Got endpoints: latency-svc-xz7jc [776.075875ms] Jan 11 14:51:35.620: INFO: Created: latency-svc-47r9b Jan 11 14:51:35.634: INFO: Created: latency-svc-98kzc Jan 11 14:51:35.636: INFO: Got endpoints: latency-svc-qkqrg [750.395448ms] Jan 11 14:51:35.655: INFO: Created: latency-svc-qxljw Jan 11 14:51:35.691: INFO: Got endpoints: latency-svc-9tc98 [756.10723ms] Jan 11 14:51:35.710: INFO: Created: latency-svc-dzcf2 Jan 11 14:51:35.734: INFO: Got endpoints: latency-svc-2lk77 [749.979387ms] Jan 11 14:51:35.743: INFO: Created: latency-svc-6rt9l Jan 11 14:51:35.784: INFO: Got endpoints: latency-svc-bx4v9 [749.046107ms] Jan 11 14:51:35.796: INFO: Created: latency-svc-c6l95 Jan 11 14:51:35.835: INFO: Got endpoints: latency-svc-z9dn5 [751.165318ms] Jan 11 14:51:35.845: INFO: Created: latency-svc-w4ngp Jan 11 14:51:35.886: INFO: Got endpoints: latency-svc-kffnt [752.115402ms] Jan 11 14:51:35.895: INFO: Created: latency-svc-vvbrs Jan 11 14:51:35.936: INFO: Got endpoints: latency-svc-chxz2 [749.471818ms] Jan 11 14:51:35.945: INFO: Created: latency-svc-m9r77 Jan 11 14:51:35.985: INFO: Got endpoints: latency-svc-rswxj [751.820194ms] Jan 11 14:51:35.999: INFO: Created: latency-svc-f46zd Jan 11 14:51:36.135: INFO: Got endpoints: latency-svc-l5wks [849.111939ms] Jan 11 14:51:36.148: INFO: Created: latency-svc-82m2x Jan 11 14:51:36.188: INFO: Got endpoints: latency-svc-7llnl [851.25078ms] Jan 11 14:51:36.198: INFO: Created: latency-svc-84q7c Jan 11 14:51:36.236: INFO: Got endpoints: latency-svc-8l7vp [852.712938ms] Jan 11 14:51:36.247: INFO: Created: latency-svc-g4xr5 Jan 11 14:51:36.283: INFO: Got endpoints: latency-svc-qbmxd [844.230575ms] Jan 11 14:51:36.297: INFO: Created: latency-svc-4c4dx Jan 11 14:51:36.335: INFO: Got endpoints: latency-svc-8xxmc [842.068163ms] Jan 11 14:51:36.345: INFO: Created: latency-svc-z7mx5 Jan 11 14:51:36.385: INFO: Got endpoints: latency-svc-47r9b [844.57125ms] Jan 11 14:51:36.395: INFO: Created: latency-svc-xkvn7 Jan 11 14:51:36.437: INFO: Got endpoints: latency-svc-98kzc [827.620193ms] Jan 11 14:51:36.452: INFO: Created: latency-svc-nlhm6 Jan 11 14:51:36.488: INFO: Got endpoints: latency-svc-qxljw [851.944938ms] Jan 11 14:51:36.501: INFO: Created: latency-svc-v96hd Jan 11 14:51:36.534: INFO: Got endpoints: latency-svc-dzcf2 [842.877409ms] Jan 11 14:51:36.551: INFO: Created: latency-svc-h2fms Jan 11 14:51:36.585: INFO: Got endpoints: latency-svc-6rt9l [851.455951ms] Jan 11 14:51:36.595: INFO: Created: latency-svc-svvxt Jan 11 14:51:36.634: INFO: Got endpoints: latency-svc-c6l95 [848.755632ms] Jan 11 14:51:36.647: INFO: Created: latency-svc-588wq Jan 11 14:51:36.683: INFO: Got endpoints: latency-svc-w4ngp [848.164503ms] Jan 11 14:51:36.694: INFO: Created: latency-svc-z4gms Jan 11 14:51:36.734: INFO: Got endpoints: latency-svc-vvbrs [848.189104ms] Jan 11 14:51:36.745: INFO: Created: latency-svc-kt5jq Jan 11 14:51:36.784: INFO: Got endpoints: latency-svc-m9r77 [847.870734ms] Jan 11 14:51:36.799: INFO: Created: latency-svc-ftsgd Jan 11 14:51:36.834: INFO: Got endpoints: latency-svc-f46zd [847.553428ms] Jan 11 14:51:36.844: INFO: Created: latency-svc-jmn7l Jan 11 14:51:36.884: INFO: Got endpoints: latency-svc-82m2x [748.834474ms] Jan 11 14:51:36.894: INFO: Created: latency-svc-hc9tp Jan 11 14:51:36.933: INFO: Got endpoints: 
latency-svc-84q7c [745.520071ms] Jan 11 14:51:36.944: INFO: Created: latency-svc-8p6ls Jan 11 14:51:36.987: INFO: Got endpoints: latency-svc-g4xr5 [750.574912ms] Jan 11 14:51:36.999: INFO: Created: latency-svc-9s4kk Jan 11 14:51:37.035: INFO: Got endpoints: latency-svc-4c4dx [751.152447ms] Jan 11 14:51:37.043: INFO: Created: latency-svc-sjmdp Jan 11 14:51:37.085: INFO: Got endpoints: latency-svc-z7mx5 [750.023117ms] Jan 11 14:51:37.095: INFO: Created: latency-svc-z7xjf Jan 11 14:51:37.135: INFO: Got endpoints: latency-svc-xkvn7 [749.75208ms] Jan 11 14:51:37.144: INFO: Created: latency-svc-pdzgh Jan 11 14:51:37.185: INFO: Got endpoints: latency-svc-nlhm6 [747.365468ms] Jan 11 14:51:37.194: INFO: Created: latency-svc-zqpms Jan 11 14:51:37.233: INFO: Got endpoints: latency-svc-v96hd [745.156673ms] Jan 11 14:51:37.242: INFO: Created: latency-svc-rnrqt Jan 11 14:51:37.287: INFO: Got endpoints: latency-svc-h2fms [752.364595ms] Jan 11 14:51:37.297: INFO: Created: latency-svc-kwkpg Jan 11 14:51:37.335: INFO: Got endpoints: latency-svc-svvxt [750.188538ms] Jan 11 14:51:37.348: INFO: Created: latency-svc-t44db Jan 11 14:51:37.384: INFO: Got endpoints: latency-svc-588wq [750.003188ms] Jan 11 14:51:37.396: INFO: Created: latency-svc-dl4rj Jan 11 14:51:37.434: INFO: Got endpoints: latency-svc-z4gms [750.261814ms] Jan 11 14:51:37.448: INFO: Created: latency-svc-lkfsd Jan 11 14:51:37.493: INFO: Got endpoints: latency-svc-kt5jq [759.11368ms] Jan 11 14:51:37.510: INFO: Created: latency-svc-bnk42 Jan 11 14:51:37.542: INFO: Got endpoints: latency-svc-ftsgd [758.228357ms] Jan 11 14:51:37.567: INFO: Created: latency-svc-sljg4 Jan 11 14:51:37.584: INFO: Got endpoints: latency-svc-jmn7l [750.223678ms] Jan 11 14:51:37.602: INFO: Created: latency-svc-f7w7b Jan 11 14:51:37.634: INFO: Got endpoints: latency-svc-hc9tp [749.660377ms] Jan 11 14:51:37.643: INFO: Created: latency-svc-748lj Jan 11 14:51:37.684: INFO: Got endpoints: latency-svc-8p6ls [750.716448ms] Jan 11 14:51:37.696: INFO: Created: latency-svc-94gdg Jan 11 14:51:37.735: INFO: Got endpoints: latency-svc-9s4kk [748.528111ms] Jan 11 14:51:37.748: INFO: Created: latency-svc-j9jt5 Jan 11 14:51:37.786: INFO: Got endpoints: latency-svc-sjmdp [750.737469ms] Jan 11 14:51:37.796: INFO: Created: latency-svc-7l66m Jan 11 14:51:37.834: INFO: Got endpoints: latency-svc-z7xjf [748.506891ms] Jan 11 14:51:37.847: INFO: Created: latency-svc-zqsvw Jan 11 14:51:37.884: INFO: Got endpoints: latency-svc-pdzgh [749.541631ms] Jan 11 14:51:37.894: INFO: Created: latency-svc-75bzk Jan 11 14:51:37.934: INFO: Got endpoints: latency-svc-zqpms [748.935425ms] Jan 11 14:51:37.944: INFO: Created: latency-svc-99lmp Jan 11 14:51:37.986: INFO: Got endpoints: latency-svc-rnrqt [752.273797ms] Jan 11 14:51:37.996: INFO: Created: latency-svc-s7mbk Jan 11 14:51:38.039: INFO: Got endpoints: latency-svc-kwkpg [751.782873ms] Jan 11 14:51:38.049: INFO: Created: latency-svc-sg6mv Jan 11 14:51:38.085: INFO: Got endpoints: latency-svc-t44db [749.225195ms] Jan 11 14:51:38.095: INFO: Created: latency-svc-gf8mj Jan 11 14:51:38.134: INFO: Got endpoints: latency-svc-dl4rj [750.103585ms] Jan 11 14:51:38.145: INFO: Created: latency-svc-sbhdd Jan 11 14:51:38.184: INFO: Got endpoints: latency-svc-lkfsd [750.159477ms] Jan 11 14:51:38.195: INFO: Created: latency-svc-rpmwq Jan 11 14:51:38.238: INFO: Got endpoints: latency-svc-bnk42 [745.004512ms] Jan 11 14:51:38.255: INFO: Created: latency-svc-jtndl Jan 11 14:51:38.287: INFO: Got endpoints: latency-svc-sljg4 [745.034309ms] Jan 11 14:51:38.302: INFO: Created: 
latency-svc-5t5fl Jan 11 14:51:38.334: INFO: Got endpoints: latency-svc-f7w7b [748.668658ms] Jan 11 14:51:38.352: INFO: Created: latency-svc-flkz5 Jan 11 14:51:38.385: INFO: Got endpoints: latency-svc-748lj [750.871881ms] Jan 11 14:51:38.394: INFO: Created: latency-svc-hz6cd Jan 11 14:51:38.435: INFO: Got endpoints: latency-svc-94gdg [750.463427ms] Jan 11 14:51:38.457: INFO: Created: latency-svc-vrv66 Jan 11 14:51:38.509: INFO: Got endpoints: latency-svc-j9jt5 [773.940563ms] Jan 11 14:51:38.561: INFO: Created: latency-svc-jqjwj Jan 11 14:51:38.588: INFO: Got endpoints: latency-svc-7l66m [802.259921ms] Jan 11 14:51:38.607: INFO: Created: latency-svc-bg2vx Jan 11 14:51:38.637: INFO: Got endpoints: latency-svc-zqsvw [802.557469ms] Jan 11 14:51:38.649: INFO: Created: latency-svc-cp62z Jan 11 14:51:38.685: INFO: Got endpoints: latency-svc-75bzk [800.837453ms] Jan 11 14:51:38.697: INFO: Created: latency-svc-zrbwt Jan 11 14:51:38.737: INFO: Got endpoints: latency-svc-99lmp [803.753138ms] Jan 11 14:51:38.749: INFO: Created: latency-svc-8dsj7 Jan 11 14:51:38.783: INFO: Got endpoints: latency-svc-s7mbk [797.644772ms] Jan 11 14:51:38.794: INFO: Created: latency-svc-2wckg Jan 11 14:51:38.835: INFO: Got endpoints: latency-svc-sg6mv [796.701163ms] Jan 11 14:51:38.845: INFO: Created: latency-svc-hr8td Jan 11 14:51:38.883: INFO: Got endpoints: latency-svc-gf8mj [798.640418ms] Jan 11 14:51:38.893: INFO: Created: latency-svc-6xr4p Jan 11 14:51:38.934: INFO: Got endpoints: latency-svc-sbhdd [799.660493ms] Jan 11 14:51:38.944: INFO: Created: latency-svc-hj9m8 Jan 11 14:51:38.986: INFO: Got endpoints: latency-svc-rpmwq [802.339728ms] Jan 11 14:51:38.997: INFO: Created: latency-svc-gx6xx Jan 11 14:51:39.034: INFO: Got endpoints: latency-svc-jtndl [795.6398ms] Jan 11 14:51:39.044: INFO: Created: latency-svc-6tzxk Jan 11 14:51:39.087: INFO: Got endpoints: latency-svc-5t5fl [799.695478ms] Jan 11 14:51:39.097: INFO: Created: latency-svc-jvd4z Jan 11 14:51:39.134: INFO: Got endpoints: latency-svc-flkz5 [799.354345ms] Jan 11 14:51:39.178: INFO: Created: latency-svc-7bxdc Jan 11 14:51:39.191: INFO: Got endpoints: latency-svc-hz6cd [806.658518ms] Jan 11 14:51:39.203: INFO: Created: latency-svc-qg2pf Jan 11 14:51:39.234: INFO: Got endpoints: latency-svc-vrv66 [798.557101ms] Jan 11 14:51:39.247: INFO: Created: latency-svc-74nq4 Jan 11 14:51:39.284: INFO: Got endpoints: latency-svc-jqjwj [774.180514ms] Jan 11 14:51:39.299: INFO: Created: latency-svc-qf278 Jan 11 14:51:39.337: INFO: Got endpoints: latency-svc-bg2vx [749.176738ms] Jan 11 14:51:39.347: INFO: Created: latency-svc-7zpc5 Jan 11 14:51:39.385: INFO: Got endpoints: latency-svc-cp62z [748.335476ms] Jan 11 14:51:39.396: INFO: Created: latency-svc-7vfm8 Jan 11 14:51:39.434: INFO: Got endpoints: latency-svc-zrbwt [748.169077ms] Jan 11 14:51:39.448: INFO: Created: latency-svc-65dn5 Jan 11 14:51:39.484: INFO: Got endpoints: latency-svc-8dsj7 [746.532707ms] Jan 11 14:51:39.500: INFO: Created: latency-svc-gcqkk Jan 11 14:51:39.539: INFO: Got endpoints: latency-svc-2wckg [755.906502ms] Jan 11 14:51:39.556: INFO: Created: latency-svc-cblx2 Jan 11 14:51:39.585: INFO: Got endpoints: latency-svc-hr8td [749.583575ms] Jan 11 14:51:39.602: INFO: Created: latency-svc-8zm6f Jan 11 14:51:39.633: INFO: Got endpoints: latency-svc-6xr4p [749.702026ms] Jan 11 14:51:39.643: INFO: Created: latency-svc-6bjzn Jan 11 14:51:39.685: INFO: Got endpoints: latency-svc-hj9m8 [751.104766ms] Jan 11 14:51:39.700: INFO: Created: latency-svc-28dt5 Jan 11 14:51:39.735: INFO: Got endpoints: 
latency-svc-gx6xx [748.326835ms] Jan 11 14:51:39.744: INFO: Created: latency-svc-lltxs Jan 11 14:51:39.784: INFO: Got endpoints: latency-svc-6tzxk [749.905348ms] Jan 11 14:51:39.795: INFO: Created: latency-svc-6xj2l Jan 11 14:51:39.834: INFO: Got endpoints: latency-svc-jvd4z [746.789767ms] Jan 11 14:51:39.845: INFO: Created: latency-svc-z98kr Jan 11 14:51:39.883: INFO: Got endpoints: latency-svc-7bxdc [749.138378ms] Jan 11 14:51:39.894: INFO: Created: latency-svc-tlxlh Jan 11 14:51:39.937: INFO: Got endpoints: latency-svc-qg2pf [745.34848ms] Jan 11 14:51:39.947: INFO: Created: latency-svc-mx4mw Jan 11 14:51:40.034: INFO: Got endpoints: latency-svc-74nq4 [800.75379ms] Jan 11 14:51:40.047: INFO: Created: latency-svc-crwhp Jan 11 14:51:40.086: INFO: Got endpoints: latency-svc-qf278 [801.837892ms] Jan 11 14:51:40.096: INFO: Created: latency-svc-6pttt Jan 11 14:51:40.133: INFO: Got endpoints: latency-svc-7zpc5 [796.164539ms] Jan 11 14:51:40.147: INFO: Created: latency-svc-pqffp Jan 11 14:51:40.183: INFO: Got endpoints: latency-svc-7vfm8 [798.253034ms] Jan 11 14:51:40.193: INFO: Created: latency-svc-9rszm Jan 11 14:51:40.234: INFO: Got endpoints: latency-svc-65dn5 [800.315806ms] Jan 11 14:51:40.244: INFO: Created: latency-svc-mrsvp Jan 11 14:51:40.286: INFO: Got endpoints: latency-svc-gcqkk [801.745314ms] Jan 11 14:51:40.296: INFO: Created: latency-svc-mqpw8 Jan 11 14:51:40.333: INFO: Got endpoints: latency-svc-cblx2 [793.952413ms] Jan 11 14:51:40.343: INFO: Created: latency-svc-p6xkp Jan 11 14:51:40.383: INFO: Got endpoints: latency-svc-8zm6f [798.428308ms] Jan 11 14:51:40.396: INFO: Created: latency-svc-79zst Jan 11 14:51:40.440: INFO: Got endpoints: latency-svc-6bjzn [806.455531ms] Jan 11 14:51:40.454: INFO: Created: latency-svc-xbltz Jan 11 14:51:40.486: INFO: Got endpoints: latency-svc-28dt5 [800.672542ms] Jan 11 14:51:40.537: INFO: Got endpoints: latency-svc-lltxs [801.794334ms] Jan 11 14:51:40.584: INFO: Got endpoints: latency-svc-6xj2l [799.861277ms] Jan 11 14:51:40.633: INFO: Got endpoints: latency-svc-z98kr [799.000531ms] Jan 11 14:51:40.683: INFO: Got endpoints: latency-svc-tlxlh [798.738269ms] Jan 11 14:51:40.734: INFO: Got endpoints: latency-svc-mx4mw [797.51404ms] Jan 11 14:51:40.783: INFO: Got endpoints: latency-svc-crwhp [747.866008ms] Jan 11 14:51:40.835: INFO: Got endpoints: latency-svc-6pttt [749.742809ms] Jan 11 14:51:40.883: INFO: Got endpoints: latency-svc-pqffp [749.954037ms] Jan 11 14:51:40.933: INFO: Got endpoints: latency-svc-9rszm [749.865552ms] Jan 11 14:51:40.984: INFO: Got endpoints: latency-svc-mrsvp [750.311939ms] Jan 11 14:51:41.037: INFO: Got endpoints: latency-svc-mqpw8 [750.576727ms] Jan 11 14:51:41.092: INFO: Got endpoints: latency-svc-p6xkp [758.417208ms] Jan 11 14:51:41.134: INFO: Got endpoints: latency-svc-79zst [750.136029ms] Jan 11 14:51:41.189: INFO: Got endpoints: latency-svc-xbltz [749.653678ms] Jan 11 14:51:41.189: INFO: Latencies: [20.229089ms 27.305905ms 36.905944ms 63.509822ms 72.149135ms 103.085421ms 114.189932ms 133.227379ms 159.671398ms 160.648824ms 161.654801ms 166.106328ms 166.520218ms 172.550146ms 180.384988ms 182.802416ms 187.655742ms 192.218454ms 192.874377ms 193.630731ms 194.820886ms 198.519406ms 198.660074ms 199.05593ms 201.115946ms 202.457719ms 204.331305ms 205.294003ms 206.200958ms 208.647679ms 210.128121ms 213.905665ms 222.516714ms 223.609146ms 225.079662ms 226.343592ms 236.725325ms 236.869782ms 241.534666ms 243.315227ms 253.789864ms 275.214381ms 309.88746ms 337.350625ms 340.096254ms 386.893755ms 424.279889ms 467.652213ms 503.741725ms 
552.629306ms 590.001061ms 627.621902ms 672.62542ms 709.989978ms 739.769631ms 743.2119ms 743.420508ms 745.004512ms 745.034309ms 745.156673ms 745.34848ms 745.520071ms 745.959687ms 746.532707ms 746.789767ms 746.930204ms 747.122434ms 747.365468ms 747.866008ms 748.082986ms 748.162075ms 748.169077ms 748.326835ms 748.335476ms 748.506891ms 748.528111ms 748.579874ms 748.668658ms 748.717952ms 748.75858ms 748.785061ms 748.804534ms 748.834474ms 748.921944ms 748.935425ms 748.949667ms 748.966091ms 749.010211ms 749.046107ms 749.076764ms 749.138378ms 749.176738ms 749.225195ms 749.225528ms 749.325057ms 749.471818ms 749.512826ms 749.541631ms 749.583575ms 749.587934ms 749.633788ms 749.653678ms 749.660377ms 749.702026ms 749.742809ms 749.75208ms 749.865552ms 749.87202ms 749.905348ms 749.954037ms 749.979387ms 749.996774ms 750.003188ms 750.023117ms 750.042204ms 750.062829ms 750.103585ms 750.136029ms 750.159477ms 750.188538ms 750.223678ms 750.261814ms 750.311939ms 750.395448ms 750.411035ms 750.463427ms 750.492356ms 750.574912ms 750.576727ms 750.716448ms 750.737469ms 750.871881ms 751.104766ms 751.152447ms 751.165318ms 751.330381ms 751.782873ms 751.820194ms 752.052061ms 752.115402ms 752.273797ms 752.364595ms 752.434442ms 752.678613ms 753.108472ms 755.272549ms 755.906502ms 756.10723ms 756.418758ms 758.228357ms 758.417208ms 759.11368ms 759.207847ms 773.940563ms 774.180514ms 776.075875ms 793.952413ms 795.6398ms 796.164539ms 796.701163ms 797.51404ms 797.644772ms 798.253034ms 798.428308ms 798.557101ms 798.640418ms 798.738269ms 799.000531ms 799.354345ms 799.660493ms 799.695478ms 799.861277ms 800.315806ms 800.672542ms 800.75379ms 800.837453ms 801.745314ms 801.794334ms 801.837892ms 802.259921ms 802.339728ms 802.557469ms 803.753138ms 806.455531ms 806.658518ms 827.620193ms 842.068163ms 842.877409ms 844.230575ms 844.57125ms 847.553428ms 847.870734ms 848.164503ms 848.189104ms 848.755632ms 849.111939ms 851.25078ms 851.455951ms 851.944938ms 852.712938ms] Jan 11 14:51:41.190: INFO: 50 %ile: 749.633788ms Jan 11 14:51:41.190: INFO: 90 %ile: 802.339728ms Jan 11 14:51:41.190: INFO: 99 %ile: 851.944938ms Jan 11 14:51:41.190: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:51:41.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-1149" for this suite.
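The 50/90/99 %ile lines are read off this sorted list of 200 endpoint-propagation latencies. A rough way to re-derive them from a copy of the samples, assuming one value per line in a file named latencies.txt (hypothetical) and simple index picks rather than the framework's exact selection rule:

# sort numerically and print approximate percentile positions
sort -n latencies.txt | awk '{ a[NR] = $1 } END { print "50%:", a[int(NR*0.50)]; print "90%:", a[int(NR*0.90)]; print "99%:", a[int(NR*0.99)] }'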
•S ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":20,"skipped":470,"failed":0} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:37.751: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a service externalname-service with the type=ExternalName in namespace services-861 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-861 I0111 14:51:37.801777 14 runners.go:190] Created replication controller with name: externalname-service, namespace: services-861, replica count: 2 I0111 14:51:40.852104 14 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 14:51:40.852: INFO: Creating new exec pod Jan 11 14:51:43.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-861 exec execpod4k88n -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 11 14:51:44.046: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 11 14:51:44.047: INFO: stdout: "" Jan 11 14:51:44.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-861 exec execpod4k88n -- /bin/sh -x -c nc -zv -t -w 2 10.139.153.126 80' Jan 11 14:51:44.218: INFO: stderr: "+ nc -zv -t -w 2 10.139.153.126 80\nConnection to 10.139.153.126 80 port [tcp/http] succeeded!\n" Jan 11 14:51:44.218: INFO: stdout: "" Jan 11 14:51:44.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-861 exec execpod4k88n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31544' Jan 11 14:51:44.357: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 31544\nConnection to 172.18.0.6 31544 port [tcp/31544] succeeded!\n" Jan 11 14:51:44.357: INFO: stdout: "" Jan 11 14:51:44.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-861 exec execpod4k88n -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 31544' Jan 11 14:51:44.510: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 31544\nConnection to 172.18.0.7 31544 port [tcp/31544] succeeded!\n" Jan 11 14:51:44.510: INFO: stdout: "" Jan 11 14:51:44.510: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:51:44.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-861" for this suite.
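The core of this spec is flipping spec.type on a live Service and proving the NodePort path works; the nc probes above hit the service name, the ClusterIP, and each node IP on the allocated NodePort. A hand-run sketch of the same type change (service and namespace names are the ones from this run; the patch itself is only an approximation of what the test does through the API):

# switch the Service from ExternalName to NodePort, then read back the allocated port
kubectl -n services-861 patch service externalname-service -p '{"spec":{"type":"NodePort"}}'
kubectl -n services-861 get service externalname-service -o jsonpath='{.spec.ports[0].nodePort}'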
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":48,"skipped":1115,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:44.593: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 14:51:44.636: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a" in namespace "projected-6829" to be "Succeeded or Failed" Jan 11 14:51:44.639: INFO: Pod "downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.694783ms Jan 11 14:51:46.642: INFO: Pod "downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006063828s STEP: Saw pod success Jan 11 14:51:46.642: INFO: Pod "downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a" satisfied condition "Succeeded or Failed" Jan 11 14:51:46.647: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-rjxfz pod downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a container client-container: <nil> STEP: delete the pod Jan 11 14:51:46.666: INFO: Waiting for pod downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a to disappear Jan 11 14:51:46.670: INFO: Pod downwardapi-volume-8ad8f671-a146-4fca-bb98-f2dc99d0356a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:51:46.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6829" for this suite.
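The DefaultMode assertion reduces to mounting a downward API volume with an explicit mode and checking the resulting file permissions inside the container. A minimal sketch, with every name and the mode value illustrative rather than taken from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.28
    # listing the mount shows the mode applied to the projected file
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF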
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1133,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:46.686: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:51:46.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6377" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":50,"skipped":1137,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:46.759: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 11 14:51:48.165: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 11 14:51:51.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:52.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:53.200: INFO:
Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:54.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:55.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:56.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:57.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:58.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:51:59.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:52:00.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 Jan 11 14:52:01.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:01.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4630" for this suite. STEP: Destroying namespace "webhook-4630-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":51,"skipped":1153,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:51:41.233: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 STEP: Creating service test in namespace statefulset-7244 [It] should have a working
scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating statefulset ss in namespace statefulset-7244 Jan 11 14:51:41.298: INFO: Found 0 stateful pods, waiting for 1 Jan 11 14:51:51.309: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 11 14:51:51.323: INFO: Deleting all statefulset in ns statefulset-7244 Jan 11 14:51:51.326: INFO: Scaling statefulset ss to 0 Jan 11 14:52:01.381: INFO: Waiting for statefulset status.replicas updated to 0 Jan 11 14:52:01.389: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:01.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7244" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":21,"skipped":475,"failed":0} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:01.457: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 11 14:52:01.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f" in namespace "downward-api-6181" to be "Succeeded or Failed" Jan 11 14:52:01.521: INFO: Pod "downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467033ms Jan 11 14:52:03.524: INFO: Pod "downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.008174622s STEP: Saw pod success Jan 11 14:52:03.524: INFO: Pod "downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f" satisfied condition "Succeeded or Failed" Jan 11 14:52:03.527: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f container client-container: <nil> STEP: delete the pod Jan 11 14:52:03.540: INFO: Waiting for pod downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f to disappear Jan 11 14:52:03.543: INFO: Pod downwardapi-volume-abadaa9b-2b13-45c4-aa3b-6cdf6ff3bd9f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:03.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6181" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":475,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:03.574: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:03.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2673" for this suite.
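The Kubelet spec here only needs a pod whose container can never come up, then verifies deletion still completes cleanly. A hand-run equivalent, with the pod name and image illustrative rather than taken from this run:

# the container exits non-zero immediately and forever; the delete must still succeed
kubectl run bin-false --image=busybox:1.28 --restart=Never -- /bin/false
kubectl delete pod bin-false --wait=true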
• ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":490,"failed":0} SSS ------------------------------ [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:01.425: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 14:52:01.488: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-7bbaf2b3-4d94-4c7f-8621-0a9aca0b7036" in namespace "security-context-test-6649" to be "Succeeded or Failed" Jan 11 14:52:01.499: INFO: Pod "alpine-nnp-false-7bbaf2b3-4d94-4c7f-8621-0a9aca0b7036": Phase="Pending", Reason="", readiness=false. Elapsed: 11.564145ms Jan 11 14:52:03.503: INFO: Pod "alpine-nnp-false-7bbaf2b3-4d94-4c7f-8621-0a9aca0b7036": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015165959s Jan 11 14:52:05.507: INFO: Pod "alpine-nnp-false-7bbaf2b3-4d94-4c7f-8621-0a9aca0b7036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018805344s Jan 11 14:52:05.507: INFO: Pod "alpine-nnp-false-7bbaf2b3-4d94-4c7f-8621-0a9aca0b7036" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:05.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6649" for this suite. • ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1186,"failed":1,"failures":["[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:03.628: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller.
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:14.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5269" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":24,"skipped":493,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:14.757: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jan 11 14:52:15.808: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:15.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6909" for this suite.
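The "Expected: &{} to match Container's Termination Message: --" line is the interesting assertion: with terminationMessagePolicy set to FallbackToLogsOnError, a container that exits 0 without writing /dev/termination-log and without producing error-path logs should end up with an empty message. A sketch of the same scenario, with all names illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.28
    command: ["true"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod has succeeded, the terminated state should carry no message
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'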
• ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:15.877: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating a Deployment STEP: waiting for Deployment to be created STEP: waiting for all Replicas to be Ready Jan 11 14:52:15.917: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.917: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.922: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.922: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.935: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.935: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.951: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:15.951: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 11 14:52:16.709: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 11 14:52:16.709: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 11 14:52:17.421: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 and labels map[test-deployment-static:true] STEP: patching the Deployment Jan 11 14:52:17.433: INFO: observed event type ADDED STEP: waiting for Replicas to scale Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with
ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.435: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 0 Jan 11 14:52:17.436: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:17.437: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:17.437: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.437: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.437: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.437: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.441: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.441: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.455: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.455: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 2 Jan 11 14:52:17.469: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 STEP: listing Deployments Jan 11 14:52:17.474: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] STEP: updating the Deployment Jan 11 14:52:17.485: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 STEP: fetching the DeploymentStatus Jan 11 14:52:17.492: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:patched test-deployment-static:true] Jan 11 14:52:17.492: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.506: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.523: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.540: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.546: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.563: INFO: observed Deployment test-deployment in
namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 11 14:52:17.571: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] STEP: patching the DeploymentStatus STEP: fetching the DeploymentStatus Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 Jan 11 14:52:18.743: INFO: observed Deployment test-deployment in namespace deployment-7774 with ReadyReplicas 1 STEP: deleting the Deployment Jan 11 14:52:18.758: INFO: observed event type MODIFIED Jan 11 14:52:18.758: INFO: observed event type MODIFIED Jan 11 14:52:18.758: INFO: observed event type MODIFIED Jan 11 14:52:18.758: INFO: observed event type MODIFIED Jan 11 14:52:18.758: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED Jan 11 14:52:18.759: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 14:52:18.763: INFO: Log out all the ReplicaSets if there is no deployment created Jan 11 14:52:18.766: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-7774 adc0e6e7-cf6f-410f-b2f0-d37681a8482f 11774 3 2023-01-11 14:52:17 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 77f422fb-9c59-4a11-b1b6-8fe60550d5af 0xc002e2fde7 0xc002e2fde8}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:52:18 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77f422fb-9c59-4a11-b1b6-8fe60550d5af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e2fe50 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:52:18.770: INFO: pod: "test-deployment-768947d6f5-4jp5r": &Pod{ObjectMeta:{test-deployment-768947d6f5-4jp5r test-deployment-768947d6f5- deployment-7774 f9b84dc8-4971-4479-8846-a509873749c8 11777 0 2023-01-11 14:52:18 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 adc0e6e7-cf6f-410f-b2f0-d37681a8482f 0xc00407c237 0xc00407c238}] [] [{kube-controller-manager Update v1 2023-01-11 14:52:18 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adc0e6e7-cf6f-410f-b2f0-d37681a8482f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:52:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pzcxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pzcxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pzcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-11 14:52:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:52:18.770: INFO: pod: "test-deployment-768947d6f5-8zngr": &Pod{ObjectMeta:{test-deployment-768947d6f5-8zngr test-deployment-768947d6f5- deployment-7774 0f5809fa-d228-4baf-8c31-2636483b2913 11756 0 2023-01-11 14:52:17 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 adc0e6e7-cf6f-410f-b2f0-d37681a8482f 0xc00407c3b7 0xc00407c3b8}] [] [{kube-controller-manager Update v1 2023-01-11 14:52:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"adc0e6e7-cf6f-410f-b2f0-d37681a8482f\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:52:18 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.65\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pzcxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pzcxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pzcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.65,StartTime:2023-01-11 14:52:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:52:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a39be7ee63961c50f8aa54bfc9cd99e232da105c23af093474dd0dd0ab325790,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.65,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 11 14:52:18.770: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-7774 d70862c9-33f9-488d-b21e-e79f85fee703 11775 4 2023-01-11 14:52:17 +0000 UTC <nil> <nil> map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 77f422fb-9c59-4a11-b1b6-8fe60550d5af 0xc002e2feb7 0xc002e2feb8}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:52:18 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77f422fb-9c59-4a11-b1b6-8fe60550d5af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002e2ff38 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] 
<nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:52:18.773: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-7774 d2726b5e-4dff-4d3c-8092-1bb184caf071 11714 2 2023-01-11 14:52:15 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 77f422fb-9c59-4a11-b1b6-8fe60550d5af 0xc002e2ff97 0xc002e2ff98}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:52:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"77f422fb-9c59-4a11-b1b6-8fe60550d5af\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00407c000 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:52:18.776: INFO: pod: "test-deployment-8b6954bfb-c6nhx": &Pod{ObjectMeta:{test-deployment-8b6954bfb-c6nhx test-deployment-8b6954bfb- deployment-7774 84d50b55-3bbd-402b-935b-74b9ed4c672e 11679 0 2023-01-11 14:52:15 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb d2726b5e-4dff-4d3c-8092-1bb184caf071 0xc003551357 0xc003551358}] [] [{kube-controller-manager Update v1 2023-01-11 14:52:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d2726b5e-4dff-4d3c-8092-1bb184caf071\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:52:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.64\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pzcxb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pzcxb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pzcxb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:
nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.64,StartTime:2023-01-11 14:52:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:52:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://f76d893284d97b7f531e0c8a459e75669f67a463cf7cbb0d9fc47407cb0650ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.64,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7774" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":26,"skipped":568,"failed":0} SSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:18.799: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating configMap with name configmap-test-upd-45cd4c8c-a46d-4871-845c-f243e0e416c9 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:20.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8100" for this suite.
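The binary-data spec above reduces to mounting a ConfigMap that carries both data and binaryData keys and reading the files back from the volume. A minimal sketch (names and the base64 payload are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: binary-demo
    data:
      text: "hello"
    binaryData:
      blob: 3q2+7w==   # base64 for the raw bytes 0xDE 0xAD 0xBE 0xEF
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: binary-demo-pod
    spec:
      restartPolicy: Never
      volumes:
      - name: cm
        configMap:
          name: binary-demo
      containers:
      - name: main
        image: busybox:1.29
        # Print the text key, then re-encode the binary key to confirm
        # the bytes survived the round trip through the volume.
        command: ["/bin/sh", "-c", "cat /etc/cm/text; echo; base64 /etc/cm/blob"]
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
    EOF
    kubectl logs binary-demo-pod   # expect "hello" then "3q2+7w=="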
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":574,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:20.905: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating secret with name secret-test-9f87346f-ef7f-4c3c-a533-1f53708afc84 STEP: Creating a pod to test consume secrets Jan 11 14:52:20.940: INFO: Waiting up to 5m0s for pod "pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4" in namespace "secrets-2042" to be "Succeeded or Failed" Jan 11 14:52:20.942: INFO: Pod "pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094269ms Jan 11 14:52:22.945: INFO: Pod "pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005394408s STEP: Saw pod success Jan 11 14:52:22.945: INFO: Pod "pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4" satisfied condition "Succeeded or Failed" Jan 11 14:52:22.948: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4 container secret-volume-test: <nil> STEP: delete the pod Jan 11 14:52:22.960: INFO: Waiting for pod pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4 to disappear Jan 11 14:52:22.962: INFO: Pod pod-secrets-70dceb29-fe27-4f99-9245-ef6e3c33f0b4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:22.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2042" for this suite.
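What the secret-volume spec above exercises can be sketched as a non-root pod with fsGroup set that mounts a secret with a restrictive defaultMode and inspects the projected file. The UIDs, GIDs, and names below are illustrative, and the exact mode bits the kubelet reports can vary with how fsGroup is applied to projected files:

    kubectl create secret generic mode-demo --from-literal=data-1=value-1
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000   # non-root
        fsGroup: 2000     # group ownership applied to the volume
      volumes:
      - name: secret-vol
        secret:
          secretName: mode-demo
          defaultMode: 0440   # octal: owner/group read-only
      containers:
      - name: main
        image: busybox:1.29
        command: ["/bin/sh", "-c", "ls -ln /etc/secret-volume; cat /etc/secret-volume/data-1"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret-volume
    EOF
    kubectl logs secret-mode-demo   # mode/gid of data-1, then its content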
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":589,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:22.984: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-6846 STEP: creating service affinity-clusterip in namespace services-6846 STEP: creating replication controller affinity-clusterip in namespace services-6846 I0111 14:52:23.034678 16 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-6846, replica count: 3 I0111 14:52:26.093484 16 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 11 14:52:26.100: INFO: Creating new exec pod Jan 11 14:52:29.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6846 exec execpod-affinity2phkc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 11 14:52:29.319: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 11 14:52:29.319: INFO: stdout: "" Jan 11 14:52:29.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6846 exec execpod-affinity2phkc -- /bin/sh -x -c nc -zv -t -w 2 10.132.28.156 80' Jan 11 14:52:29.500: INFO: stderr: "+ nc -zv -t -w 2 10.132.28.156 80\nConnection to 10.132.28.156 80 port [tcp/http] succeeded!\n" Jan 11 14:52:29.500: INFO: stdout: "" Jan 11 14:52:29.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6846 exec execpod-affinity2phkc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.132.28.156:80/ ; done' Jan 11 14:52:29.764: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s
--connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.132.28.156:80/\n" Jan 11 14:52:29.764: INFO: stdout: "\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4\naffinity-clusterip-trzv4" Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Received response from host: affinity-clusterip-trzv4 Jan 11 14:52:29.764: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-6846, will wait for the garbage collector to delete the pods Jan 11 14:52:29.836: INFO: Deleting ReplicationController affinity-clusterip took: 5.719445ms Jan 11 14:52:29.936: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.263576ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:52:40.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6846" for this suite.
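The identical hostnames in the curl loop above are the intended effect of sessionAffinity: ClientIP, which pins each client IP to one backend pod. A minimal sketch of such a Service (name, selector, ports, and timeout are illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-demo
    spec:
      selector:
        app: affinity-demo
      ports:
      - port: 80
        targetPort: 8080
      sessionAffinity: ClientIP       # pin each client IP to one backend
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10800       # affinity window (the 3h default)
    EOF
    # Repeated requests from the same client should keep returning the same
    # pod hostname, as with affinity-clusterip-trzv4 in the log above.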
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • ------------------------------ {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 11 14:52:40.418: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 11 14:52:40.461: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 11 14:52:40.480: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 11 14:52:45.483: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 11 14:52:45.483: INFO: Creating deployment "test-rolling-update-deployment" Jan 11 14:52:45.488: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 11 14:52:45.494: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 11 14:52:47.501: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 11 14:52:47.504: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 11 14:52:47.513: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-4616 ffabd9ef-43ef-4081-82de-9398e6984a42 12140 1 2023-01-11 14:52:45 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-11 14:52:45 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:52:46 +0000 UTC FieldsV1
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003c21508 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-11 14:52:45 +0000 UTC,LastTransitionTime:2023-01-11 14:52:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-6b6bf9df46" has successfully progressed.,LastUpdateTime:2023-01-11 14:52:46 +0000 UTC,LastTransitionTime:2023-01-11 14:52:45 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 11 14:52:47.516: INFO: New ReplicaSet "test-rolling-update-deployment-6b6bf9df46" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46 deployment-4616 ce800bbd-e6ef-4c16-b021-ff560cfb3ac4 12129 1 2023-01-11 14:52:45 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ffabd9ef-43ef-4081-82de-9398e6984a42 0xc002b80547 0xc002b80548}] [] [{kube-controller-manager Update apps/v1 2023-01-11 14:52:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffabd9ef-43ef-4081-82de-9398e6984a42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 6b6bf9df46,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b806e8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:52:47.516: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 11 14:52:47.517: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-4616 63673519-5310-490c-9cd3-551d9444c5eb 12139 2 2023-01-11 14:52:40 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ffabd9ef-43ef-4081-82de-9398e6984a42 0xc003c21ff7 0xc003c21ff8}] [] [{e2e.test Update apps/v1 2023-01-11 14:52:40 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-11 14:52:46 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ffabd9ef-43ef-4081-82de-9398e6984a42\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b80288 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 11 14:52:47.519: INFO: Pod "test-rolling-update-deployment-6b6bf9df46-glt6d" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-6b6bf9df46-glt6d test-rolling-update-deployment-6b6bf9df46- deployment-4616 fbcfd5e8-3db7-4aa0-9c99-3849398bb0a0 12128 0 2023-01-11 14:52:45 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:6b6bf9df46] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-6b6bf9df46 ce800bbd-e6ef-4c16-b021-ff560cfb3ac4 0xc002b81837 0xc002b81838}] [] [{kube-controller-manager Update v1 2023-01-11 14:52:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ce800bbd-e6ef-4c16-b021-ff560cfb3ac4\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-11 14:52:46 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.71\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jnpzr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jnpzr,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jnpzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-dctc5v-worker-2py7ys,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:45 +0000 
UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-11 14:52:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.71,StartTime:2023-01-11 14:52:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-11 14:52:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://3956533044fa3eceb2482ff7989f3ffd353f8127d56bfe437882cb91e5518c39,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.71,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:47.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4616" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":30,"skipped":622,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:47.539: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:52:47.573: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-c64e2082-1949-4b59-9add-c098be1bf4af" in namespace "security-context-test-1819" to be "Succeeded or Failed"
Jan 11 14:52:47.576: INFO: Pod "busybox-readonly-false-c64e2082-1949-4b59-9add-c098be1bf4af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.281945ms
Jan 11 14:52:49.579: INFO: Pod "busybox-readonly-false-c64e2082-1949-4b59-9add-c098be1bf4af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005581811s
Jan 11 14:52:49.579: INFO: Pod "busybox-readonly-false-c64e2082-1949-4b59-9add-c098be1bf4af" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:49.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1819" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":630,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:49.603: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:49.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-243" for this suite.
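For context, this conformance case only reads discovery: it fetches /apis and asserts that the apiextensions.k8s.io group and its v1 version are advertised, then drills into the group and version documents. A minimal client-go sketch of the top-level check (illustrative only, not the test's own code; the kubeconfig path is the one the run uses):

    package main

    import (
        "fmt"

        "k8s.io/client-go/discovery"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same kubeconfig the e2e run uses.
        config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(config)
        if err != nil {
            panic(err)
        }
        // ServerGroups is backed by the /apis discovery document.
        groups, err := dc.ServerGroups()
        if err != nil {
            panic(err)
        }
        for _, g := range groups.Groups {
            if g.Name == "apiextensions.k8s.io" {
                for _, v := range g.Versions {
                    // The conformance case expects v1 to appear here.
                    fmt.Println("advertised:", g.Name, v.Version)
                }
            }
        }
    }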
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":32,"skipped":641,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:49.656: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-6d98d5c9-ed16-48ce-9632-1f19f47519b7
STEP: Creating a pod to test consume configMaps
Jan 11 14:52:49.690: INFO: Waiting up to 5m0s for pod "pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5" in namespace "configmap-423" to be "Succeeded or Failed"
Jan 11 14:52:49.693: INFO: Pod "pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.946217ms
Jan 11 14:52:51.698: INFO: Pod "pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007528504s
STEP: Saw pod success
Jan 11 14:52:51.698: INFO: Pod "pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5" satisfied condition "Succeeded or Failed"
Jan 11 14:52:51.700: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5 container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:52:51.713: INFO: Waiting for pod pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5 to disappear
Jan 11 14:52:51.715: INFO: Pod pod-configmaps-aee43b68-413d-4db9-a576-107e98c549f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:51.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-423" for this suite.
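The pod this case creates simply mounts the ConfigMap as a volume and reads a key back from the mounted file. A sketch of that pod shape in Go, using the same agnhost image seen elsewhere in this log (names and the mounttest args are illustrative, not copied from the test):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // minimalConfigMapPod mirrors the shape of the pod in the test above:
    // mount the named ConfigMap as a volume and read a key back as a file.
    func minimalConfigMapPod(cmName string) *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "agnhost-container",
                    Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
                    // Illustrative args: print the file projected from the ConfigMap key.
                    Args: []string{"mounttest", "--file_content=/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
            },
        }
    }

    func main() {
        fmt.Println(minimalConfigMapPod("configmap-test-volume").Spec.Containers[0].Image)
    }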
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":648,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:51.753: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating secret with name secret-test-37b5321e-100c-4c94-b633-0be6ca3fb08e
STEP: Creating a pod to test consume secrets
Jan 11 14:52:51.793: INFO: Waiting up to 5m0s for pod "pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68" in namespace "secrets-2914" to be "Succeeded or Failed"
Jan 11 14:52:51.799: INFO: Pod "pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68": Phase="Pending", Reason="", readiness=false. Elapsed: 5.670943ms
Jan 11 14:52:53.802: INFO: Pod "pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009324719s
STEP: Saw pod success
Jan 11 14:52:53.802: INFO: Pod "pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68" satisfied condition "Succeeded or Failed"
Jan 11 14:52:53.806: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68 container secret-env-test: <nil>
STEP: delete the pod
Jan 11 14:52:53.821: INFO: Waiting for pod pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68 to disappear
Jan 11 14:52:53.824: INFO: Pod pod-secrets-4743db94-8e07-4d45-8222-99ebd07c7b68 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:53.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2914" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":670,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:53.850: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should provide secure master service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:52:53.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5824" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":35,"skipped":685,"failed":0}
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":699,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:49:02.835: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-5015
STEP: creating service affinity-nodeport-transition in namespace services-5015
STEP: creating replication controller affinity-nodeport-transition in namespace services-5015
I0111 14:49:27.703371 19 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-5015, replica count: 3
I0111 14:49:30.754159 19 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:49:30.784: INFO: Creating new exec pod
Jan 11 14:49:35.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Jan 11 14:49:36.263: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jan 11 14:49:36.263: INFO: stdout: ""
Jan 11 14:49:36.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c nc -zv -t -w 2 10.136.115.103 80'
Jan 11 14:49:36.656: INFO: stderr: "+ nc -zv -t -w 2 10.136.115.103 80\nConnection to 10.136.115.103 80 port [tcp/http] succeeded!\n"
Jan 11 14:49:36.656: INFO: stdout: ""
Jan 11 14:49:36.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31070'
Jan 11 14:49:37.006: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 31070\nConnection to 172.18.0.6 31070 port [tcp/31070] succeeded!\n"
Jan 11 14:49:37.006: INFO: stdout: ""
Jan 11 14:49:37.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31070'
Jan 11 14:49:37.350: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31070\nConnection to 172.18.0.5 31070 port [tcp/31070] succeeded!\n"
Jan 11 14:49:37.350: INFO: stdout: ""
Jan 11 14:49:37.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31070/ ; done'
Jan 11 14:50:27.757: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n"
Jan 11 14:50:27.757: INFO: stdout: "\naffinity-nodeport-transition-hn7ps\n"
Jan 11 14:50:27.757: INFO: Received response from host: affinity-nodeport-transition-hn7ps
Jan 11 14:50:57.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31070/ ; done'
Jan 11 14:51:47.968: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n"
Jan 11 14:51:47.968: INFO: stdout: "\naffinity-nodeport-transition-fkxdl\naffinity-nodeport-transition-fkxdl\naffinity-nodeport-transition-fkxdl\n"
Jan 11 14:51:47.968: INFO: Received response from host: affinity-nodeport-transition-fkxdl
Jan 11 14:51:47.968: INFO: Received response from host: affinity-nodeport-transition-fkxdl
Jan 11 14:51:47.968: INFO: Received response from host: affinity-nodeport-transition-fkxdl
Jan 11 14:51:57.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31070/ ; done'
Jan 11 14:52:47.945: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n"
Jan 11 14:52:47.945: INFO: stdout: "\naffinity-nodeport-transition-hn7ps\naffinity-nodeport-transition-hn7ps\n"
Jan 11 14:52:47.945: INFO: Received response from host: affinity-nodeport-transition-hn7ps
Jan 11 14:52:47.945: INFO: Received response from host: affinity-nodeport-transition-hn7ps
Jan 11 14:52:47.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5015 exec execpod-affinity4qbgz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31070/ ; done'
Jan 11 14:53:38.160: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31070/\n"
Jan 11 14:53:38.160: INFO: stdout: "\n"
Jan 11 14:53:38.160: INFO: [affinity-nodeport-transition-hn7ps affinity-nodeport-transition-fkxdl affinity-nodeport-transition-fkxdl affinity-nodeport-transition-fkxdl affinity-nodeport-transition-hn7ps affinity-nodeport-transition-hn7ps]
Jan 11 14:53:38.160: FAIL: Connection timed out or not enough responses.
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc001b471e0, 0xc000884000, 0xc0049d7130, 0xa, 0x795e, 0x0, 0xc000884000)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001087340, 0x56112e0, 0xc001b471e0, 0xc000a5c000, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399
k8s.io/kubernetes/test/e2e/network.glob..func24.30()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2485 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003202300)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003202300)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003202300, 0x4fc9940)
/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1168 +0x2b3
Jan 11 14:53:38.161: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-5015, will wait for the garbage collector to delete the pods
Jan 11 14:53:38.235: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.178638ms
Jan 11 14:53:38.836: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.257537ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:53:46.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5015" for this suite.
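The FAIL above comes from the affinity checker at test/e2e/network/service.go:202: the exec pod curls the NodePort 16 times per attempt, and the helper gives up when it cannot collect enough answers, which is exactly the "Connection timed out or not enough responses." message in this run. A simplified Go sketch of that style of check (illustrative only; the real helper also retries and, for session affinity, requires a run of identical responses):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkAffinity is a simplified stand-in for the failing e2e helper: it
    // sends a series of requests to the NodePort and records which backend pod
    // answered each one. Timeouts produce gaps, and too few answers is a failure.
    func checkAffinity(url string, tries int) ([]string, error) {
        client := &http.Client{Timeout: 2 * time.Second}
        var hosts []string
        for i := 0; i < tries; i++ {
            resp, err := client.Get(url)
            if err != nil {
                continue // the failing runs above hit exactly this: timeouts, missing answers
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            hosts = append(hosts, string(body))
        }
        if len(hosts) < tries {
            return hosts, fmt.Errorf("connection timed out or not enough responses: got %d/%d", len(hosts), tries)
        }
        return hosts, nil
    }

    func main() {
        // Node IP and NodePort taken from the log.
        hosts, err := checkAffinity("http://172.18.0.7:31070/", 16)
        fmt.Println(hosts, err)
    }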
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• Failure [283.541 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

Jan 11 14:53:38.160: Connection timed out or not enough responses.

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:51:41.231: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0111 14:52:21.310073 18 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 11 14:57:21.313: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
Jan 11 14:57:21.313: INFO: Deleting pod "simpletest.rc-7frn5" in namespace "gc-1008"
Jan 11 14:57:21.324: INFO: Deleting pod "simpletest.rc-d8ffs" in namespace "gc-1008"
Jan 11 14:57:21.336: INFO: Deleting pod "simpletest.rc-dnrt8" in namespace "gc-1008"
Jan 11 14:57:21.346: INFO: Deleting pod "simpletest.rc-f6lgr" in namespace "gc-1008"
Jan 11 14:57:21.356: INFO: Deleting pod "simpletest.rc-jgm57" in namespace "gc-1008"
Jan 11 14:57:21.369: INFO: Deleting pod "simpletest.rc-lhs5j" in namespace "gc-1008"
Jan 11 14:57:21.378: INFO: Deleting pod "simpletest.rc-skhkd" in namespace "gc-1008"
Jan 11 14:57:21.389: INFO: Deleting pod "simpletest.rc-vb6hk" in namespace "gc-1008"
Jan 11 14:57:21.415: INFO: Deleting pod "simpletest.rc-xvpdc" in namespace "gc-1008"
Jan 11 14:57:21.451: INFO: Deleting pod "simpletest.rc-zwct4" in namespace "gc-1008"
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:57:21.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1008" for this suite.
• [SLOW TEST:340.270 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":23,"skipped":288,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:57:21.614: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 11 14:57:21.689: INFO: Waiting up to 5m0s for pod "pod-1601d6b7-c647-4621-a468-236d42b5cafb" in namespace "emptydir-5770" to be "Succeeded or Failed"
Jan 11 14:57:21.693: INFO: Pod "pod-1601d6b7-c647-4621-a468-236d42b5cafb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.222669ms
Jan 11 14:57:23.698: INFO: Pod "pod-1601d6b7-c647-4621-a468-236d42b5cafb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008962019s
STEP: Saw pod success
Jan 11 14:57:23.698: INFO: Pod "pod-1601d6b7-c647-4621-a468-236d42b5cafb" satisfied condition "Succeeded or Failed"
Jan 11 14:57:23.701: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-md-0-sh4pd-7cd76d785d-wt6zv pod pod-1601d6b7-c647-4621-a468-236d42b5cafb container test-container: <nil>
STEP: delete the pod
Jan 11 14:57:23.725: INFO: Waiting for pod pod-1601d6b7-c647-4621-a468-236d42b5cafb to disappear
Jan 11 14:57:23.727: INFO: Pod pod-1601d6b7-c647-4621-a468-236d42b5cafb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:57:23.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5770" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":324,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:57:23.805: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-8478
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 14:57:23.833: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 11 14:57:23.867: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 14:57:25.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:27.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:29.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:31.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:33.872: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:35.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:37.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:39.871: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:41.871: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 11 14:57:41.876: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 11 14:57:41.881: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 11 14:57:41.885: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 11 14:57:43.899: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 11 14:57:43.899: INFO: Breadth first check of 192.168.1.47 on host 172.18.0.7...
Jan 11 14:57:43.903: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=udp&host=192.168.1.47&port=8081&tries=1'] Namespace:pod-network-test-8478 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:57:43.903: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:57:43.994: INFO: Waiting for responses: map[]
Jan 11 14:57:43.994: INFO: reached 192.168.1.47 after 0/1 tries
Jan 11 14:57:43.994: INFO: Breadth first check of 192.168.0.50 on host 172.18.0.4...
Jan 11 14:57:43.996: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=udp&host=192.168.0.50&port=8081&tries=1'] Namespace:pod-network-test-8478 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:57:43.996: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:57:44.053: INFO: Waiting for responses: map[]
Jan 11 14:57:44.053: INFO: reached 192.168.0.50 after 0/1 tries
Jan 11 14:57:44.053: INFO: Breadth first check of 192.168.6.78 on host 172.18.0.5...
Jan 11 14:57:44.055: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=udp&host=192.168.6.78&port=8081&tries=1'] Namespace:pod-network-test-8478 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:57:44.055: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:57:44.138: INFO: Waiting for responses: map[]
Jan 11 14:57:44.139: INFO: reached 192.168.6.78 after 0/1 tries
Jan 11 14:57:44.139: INFO: Breadth first check of 192.168.3.54 on host 172.18.0.6...
Jan 11 14:57:44.145: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.51:9080/dial?request=hostname&protocol=udp&host=192.168.3.54&port=8081&tries=1'] Namespace:pod-network-test-8478 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:57:44.145: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:57:44.230: INFO: Waiting for responses: map[]
Jan 11 14:57:44.230: INFO: reached 192.168.3.54 after 0/1 tries
Jan 11 14:57:44.230: INFO: Going to retry 0 out of 4 pods....
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:57:44.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8478" for this suite.
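The probes above work by asking the agnhost webserver in test-container-pod (192.168.0.51:9080) to dial each netserver pod over UDP via its /dial endpoint and report which hostnames answered. A hedged Go sketch of the same probe; the JSON reply shape {"responses":[...]} is an assumption based on agnhost's netexec handler, not taken from this log:

    package main

    import (
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // dialFromTestPod mirrors the probe the test issues via ExecWithOptions:
    // ask the in-cluster test pod to dial a target pod over UDP and return the
    // hostnames that answered.
    func dialFromTestPod(testPodIP, targetIP string, port, tries int) ([]string, error) {
        url := fmt.Sprintf("http://%s:9080/dial?request=hostname&protocol=udp&host=%s&port=%d&tries=%d",
            testPodIP, targetIP, port, tries)
        resp, err := http.Get(url)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        body, err := io.ReadAll(resp.Body)
        if err != nil {
            return nil, err
        }
        var out struct {
            Responses []string `json:"responses"` // assumed field name
        }
        if err := json.Unmarshal(body, &out); err != nil {
            return nil, err
        }
        return out.Responses, nil
    }

    func main() {
        // Values taken from the log: test pod 192.168.0.51, target 192.168.1.47:8081.
        hosts, err := dialFromTestPod("192.168.0.51", "192.168.1.47", 8081, 1)
        fmt.Println(hosts, err)
    }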
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":383,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:57:44.266: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:57:44.296: INFO: Waiting up to 5m0s for pod "downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795" in namespace "projected-2443" to be "Succeeded or Failed"
Jan 11 14:57:44.298: INFO: Pod "downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.187063ms
Jan 11 14:57:46.302: INFO: Pod "downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006010316s
STEP: Saw pod success
Jan 11 14:57:46.302: INFO: Pod "downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795" satisfied condition "Succeeded or Failed"
Jan 11 14:57:46.305: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795 container client-container: <nil>
STEP: delete the pod
Jan 11 14:57:46.329: INFO: Waiting for pod downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795 to disappear
Jan 11 14:57:46.331: INFO: Pod downwardapi-volume-82119a78-e79f-4005-bdc2-d6718d03e795 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:57:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2443" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":401,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:52:53.897: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0111 14:52:55.467366 16 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
Jan 11 14:57:55.472: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:57:55.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1821" for this suite.
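What this garbage-collector case exercises is the delete call itself: removing the Deployment with PropagationPolicy=Orphan, so the ReplicaSet must survive for the test to then observe. A minimal client-go sketch of that call (namespace and name are illustrative; wire up a clientset as in the discovery sketch earlier):

    // Package gcsketch shows the API call the garbage-collector case exercises.
    package gcsketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // DeleteDeploymentOrphaningReplicaSets deletes a Deployment with
    // PropagationPolicy=Orphan: the Deployment goes away, its ReplicaSet stays,
    // which is what the test then waits to confirm the GC does not undo.
    func DeleteDeploymentOrphaningReplicaSets(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }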
• [SLOW TEST:301.586 seconds]
[sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":36,"skipped":692,"failed":0}
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":699,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:53:46.379: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-1344
STEP: creating service affinity-nodeport-transition in namespace services-1344
STEP: creating replication controller affinity-nodeport-transition in namespace services-1344
I0111 14:53:46.459133 19 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-1344, replica count: 3
I0111 14:53:49.509792 19 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 11 14:53:49.520: INFO: Creating new exec pod
Jan 11 14:53:52.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80'
Jan 11 14:53:52.701: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jan 11 14:53:52.701: INFO: stdout: ""
Jan 11 14:53:52.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c nc -zv -t -w 2 10.143.71.254 80'
Jan 11 14:53:52.865: INFO: stderr: "+ nc -zv -t -w 2 10.143.71.254 80\nConnection to 10.143.71.254 80 port [tcp/http] succeeded!\n"
Jan 11 14:53:52.865: INFO: stdout: ""
Jan 11 14:53:52.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 30652'
Jan 11 14:53:53.033: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 30652\nConnection to 172.18.0.6 30652 port [tcp/30652] succeeded!\n"
Jan 11 14:53:53.033: INFO: stdout: ""
Jan 11 14:53:53.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 30652'
Jan 11 14:53:53.197: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 30652\nConnection to 172.18.0.4 30652 port [tcp/30652] succeeded!\n"
Jan 11 14:53:53.197: INFO: stdout: ""
Jan 11 14:53:53.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30652/ ; done'
Jan 11 14:54:43.410: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30652/\n"
Jan 11 14:54:43.410: INFO: stdout: "\n"
Jan 11 14:55:13.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30652/ ; done'
Jan 11 14:56:03.583: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30652/\n"
Jan 11 14:56:03.583: INFO: stdout: "\n"
Jan 11 14:56:13.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30652/ ; done'
Jan 11 14:57:03.593: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30652/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30652/\n"
Jan 11 14:57:03.593: INFO: stdout: "\naffinity-nodeport-transition-g4624\n"
Jan 11 14:57:03.593: INFO: Received response from host: affinity-nodeport-transition-g4624
Jan 11 14:57:03.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1344 exec execpod-affinitykml7s -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30652/ ; done'
Jan 11 14:57:53.780: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30652/\n"
Jan 11 14:57:53.780: INFO: stdout: "\n"
Jan 11 14:57:53.780: INFO: [affinity-nodeport-transition-g4624]
Jan 11 14:57:53.780: FAIL: Connection timed out or not enough responses.
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc0029bdce0, 0xc000da7000, 0xc001c89e30, 0xa, 0x77bc, 0x0, 0xc000da7000)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001087340, 0x56112e0, 0xc0029bdce0, 0xc0005dd900, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399
k8s.io/kubernetes/test/e2e/network.glob..func24.30()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2485 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003202300)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003202300)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003202300, 0x4fc9940)
/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1168 +0x2b3
Jan 11 14:57:53.781: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-1344, will wait for the garbage collector to delete the pods
Jan 11 14:57:53.861: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.585824ms
Jan 11 14:57:54.462: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 600.242169ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:06.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1344" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• Failure [259.924 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

Jan 11 14:57:53.780: Connection timed out or not enough responses.

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":31,"skipped":699,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:57:46.357: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Performing setup for networking test in namespace pod-network-test-1272
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 11 14:57:46.387: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 11 14:57:46.420: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 11 14:57:48.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:50.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:52.423: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:54.425: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:56.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:57:58.430: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 11 14:58:00.423: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 11 14:58:00.428: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 11 14:58:00.433: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 11 14:58:00.437: INFO: The status of Pod netserver-3 is Running (Ready = false)
Jan 11 14:58:02.441: INFO: The status of Pod netserver-3 is Running (Ready = false)
Jan 11 14:58:04.441: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 11 14:58:06.467: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 11 14:58:06.467: INFO: Going to poll 192.168.1.48 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 11 14:58:06.469: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1272 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:06.469: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:07.557: INFO: Found all 1 expected endpoints: [netserver-0]
Jan 11 14:58:07.557: INFO: Going to poll 192.168.0.52 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 11 14:58:07.559: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.52 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1272 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:07.560: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:08.635: INFO: Found all 1 expected endpoints: [netserver-1]
Jan 11 14:58:08.635: INFO: Going to poll 192.168.6.79 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 11 14:58:08.638: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.79 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1272 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:08.638: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:09.723: INFO: Found all 1 expected endpoints: [netserver-2]
Jan 11 14:58:09.723: INFO: Going to poll 192.168.3.56 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Jan 11 14:58:09.726: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.3.56 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1272 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:09.726: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:10.800: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1272" for this suite.
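Each `echo hostName | nc -w 1 -u <podIP> 8081` probe above sends the literal command hostName to agnhost's UDP listener and expects the pod's hostname back. The same probe in plain Go (a sketch; pod IP and port taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeUDP replicates the shell probe in the log: write "hostName" to the
    // target pod's UDP port and read back whatever the listener answers.
    func probeUDP(addr string) (string, error) {
        conn, err := net.DialTimeout("udp", addr, time.Second)
        if err != nil {
            return "", err
        }
        defer conn.Close()
        _ = conn.SetDeadline(time.Now().Add(time.Second)) // mirrors nc -w 1
        if _, err := conn.Write([]byte("hostName")); err != nil {
            return "", err
        }
        buf := make([]byte, 1024)
        n, err := conn.Read(buf)
        if err != nil {
            return "", err
        }
        return string(buf[:n]), nil
    }

    func main() {
        host, err := probeUDP("192.168.1.48:8081") // expect "netserver-0" per the log
        fmt.Println(host, err)
    }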
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":414,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:58:06.312: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating the pod Jan 11 14:58:08.930: INFO: Successfully updated pod "labelsupdateb2e74c28-82d4-4f92-9916-fe781a1db741" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:58:12.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-1150" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":702,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 11 14:58:10.950: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test env composition Jan 11 14:58:10.991: INFO: Waiting up to 5m0s for pod "var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e" in namespace "var-expansion-3444" to be "Succeeded or Failed" Jan 11 14:58:10.994: INFO: Pod "var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378258ms Jan 11 14:58:12.997: INFO: Pod "var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005258038s �[1mSTEP�[0m: Saw pod success Jan 11 14:58:12.997: INFO: Pod "var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e" satisfied condition "Succeeded or Failed" Jan 11 14:58:12.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 11 14:58:13.011: INFO: Waiting for pod var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e to disappear Jan 11 14:58:13.013: INFO: Pod var-expansion-ab370ab5-aeeb-406c-ad9d-6d7a66656f4e no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 11 14:58:13.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-3444" for this suite. 
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":527,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:57:55.537: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5018
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5018
STEP: creating replication controller externalsvc in namespace services-5018
I0111 14:57:55.617116 16 runners.go:190] Created replication controller with name: externalsvc, namespace: services-5018, replica count: 2
I0111 14:57:58.667629 16 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Jan 11 14:57:58.696: INFO: Creating new exec pod
Jan 11 14:58:00.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5018 exec execpodvr5rj -- /bin/sh -x -c nslookup nodeport-service.services-5018.svc.cluster.local'
Jan 11 14:58:00.925: INFO: stderr: "+ nslookup nodeport-service.services-5018.svc.cluster.local\n"
Jan 11 14:58:00.925: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-5018.svc.cluster.local\tcanonical name = externalsvc.services-5018.svc.cluster.local.\nName:\texternalsvc.services-5018.svc.cluster.local\nAddress: 10.139.148.71\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5018, will wait for the garbage collector to delete the pods
Jan 11 14:58:00.984: INFO: Deleting ReplicationController externalsvc took: 6.530752ms
Jan 11 14:58:01.085: INFO: Terminating ReplicationController externalsvc pods took: 100.256684ms
Jan 11 14:58:16.334: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:16.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5018" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":37,"skipped":732,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:16.426: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 11 14:58:19.026: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4586 pod-service-account-b832a319-cf63-40df-a644-3bffa2658c39 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 11 14:58:19.225: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4586 pod-service-account-b832a319-cf63-40df-a644-3bffa2658c39 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 11 14:58:19.422: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4586 pod-service-account-b832a319-cf63-40df-a644-3bffa2658c39 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:19.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4586" for this suite.
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":38,"skipped":754,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:12.961: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan 11 14:58:12.991: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Registering the sample API server.
Jan 11 14:58:13.586: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 11 14:58:15.639: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:58:17.642: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045893, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-67dc674868\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 11 14:58:20.566: INFO: Waited 917.906713ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:21.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-6503" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":33,"skipped":704,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:21.388: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:58:21.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f" in namespace "downward-api-6509" to be "Succeeded or Failed"
Jan 11 14:58:21.428: INFO: Pod "downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.338858ms
Jan 11 14:58:23.431: INFO: Pod "downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011350874s
STEP: Saw pod success
Jan 11 14:58:23.431: INFO: Pod "downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f" satisfied condition "Succeeded or Failed"
Jan 11 14:58:23.435: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f container client-container: <nil>
STEP: delete the pod
Jan 11 14:58:23.448: INFO: Waiting for pod downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f to disappear
Jan 11 14:58:23.451: INFO: Pod downwardapi-volume-1350fd26-c2e0-405f-84fc-c9b05566be5f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:23.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6509" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":720,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:23.486: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Jan 11 14:58:23.524: INFO: observed Pod pod-test in namespace pods-1526 in phase Pending conditions []
Jan 11 14:58:23.526: INFO: observed Pod pod-test in namespace pods-1526 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:58:23 +0000 UTC }]
Jan 11 14:58:23.536: INFO: observed Pod pod-test in namespace pods-1526 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:58:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:58:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:58:23 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-11 14:58:23 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Jan 11 14:58:24.272: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: getting the PodStatus
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Jan 11 14:58:24.293: INFO: observed event type ADDED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
Jan 11 14:58:24.293: INFO: observed event type MODIFIED
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:24.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1526" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":35,"skipped":739,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:24.312: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:26.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7220" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":744,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:26.376: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward API volume plugin
Jan 11 14:58:26.438: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc" in namespace "downward-api-204" to be "Succeeded or Failed"
Jan 11 14:58:26.441: INFO: Pod "downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290093ms
Jan 11 14:58:28.444: INFO: Pod "downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005799104s
STEP: Saw pod success
Jan 11 14:58:28.444: INFO: Pod "downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc" satisfied condition "Succeeded or Failed"
Jan 11 14:58:28.447: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc container client-container: <nil>
STEP: delete the pod
Jan 11 14:58:28.461: INFO: Waiting for pod downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc to disappear
Jan 11 14:58:28.463: INFO: Pod downwardapi-volume-c986aafb-1f08-4c37-acba-91dabf79d1dc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:28.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-204" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":745,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:13.044: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-downwardapi-dk47
STEP: Creating a pod to test atomic-volume-subpath
Jan 11 14:58:13.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-dk47" in namespace "subpath-6419" to be "Succeeded or Failed"
Jan 11 14:58:13.082: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Pending", Reason="", readiness=false. Elapsed: 2.321684ms
Jan 11 14:58:15.086: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 2.006390806s
Jan 11 14:58:17.090: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 4.010232983s
Jan 11 14:58:19.094: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 6.013900617s
Jan 11 14:58:21.100: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 8.019913681s
Jan 11 14:58:23.104: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 10.024124726s
Jan 11 14:58:25.108: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 12.027814304s
Jan 11 14:58:27.111: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 14.030929666s
Jan 11 14:58:29.116: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 16.035754925s
Jan 11 14:58:31.119: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 18.039336659s
Jan 11 14:58:33.122: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Running", Reason="", readiness=true. Elapsed: 20.042284618s
Jan 11 14:58:35.126: INFO: Pod "pod-subpath-test-downwardapi-dk47": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.046360834s
STEP: Saw pod success
Jan 11 14:58:35.126: INFO: Pod "pod-subpath-test-downwardapi-dk47" satisfied condition "Succeeded or Failed"
Jan 11 14:58:35.129: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod pod-subpath-test-downwardapi-dk47 container test-container-subpath-downwardapi-dk47: <nil>
STEP: delete the pod
Jan 11 14:58:35.143: INFO: Waiting for pod pod-subpath-test-downwardapi-dk47 to disappear
Jan 11 14:58:35.146: INFO: Pod pod-subpath-test-downwardapi-dk47 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-dk47
Jan 11 14:58:35.146: INFO: Deleting pod "pod-subpath-test-downwardapi-dk47" in namespace "subpath-6419"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:35.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6419" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":543,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:28.514: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554
[It] should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 11 14:58:28.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3976 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod'
Jan 11 14:58:28.653: INFO: stderr: ""
Jan 11 14:58:28.653: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 11 14:58:33.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3976 get pod e2e-test-httpd-pod -o json'
Jan 11 14:58:33.803: INFO: stderr: ""
Jan 11 14:58:33.803: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-11T14:58:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-11T14:58:28Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"192.168.3.61\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-11T14:58:29Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-3976\",\n \"resourceVersion\": \"13874\",\n \"uid\": \"7cbe7acf-74cc-41ce-9195-eee82d56273e\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-4xvp7\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-dctc5v-worker-cvzb96\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-4xvp7\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-4xvp7\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T14:58:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T14:58:29Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T14:58:29Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-11T14:58:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3475438df678c5209d53e5ecb2abe88d2adcd84db048b03b5bfc678872e41dc3\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-11T14:58:29Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.3.61\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.3.61\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-11T14:58:28Z\"\n }\n}\n"
STEP: replace the image in the pod
Jan 11 14:58:33.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3976 replace -f -'
Jan 11 14:58:34.762: INFO: stderr: ""
Jan 11 14:58:34.762: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558
Jan 11 14:58:34.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3976 delete pods e2e-test-httpd-pod'
Jan 11 14:58:36.569: INFO: stderr: ""
Jan 11 14:58:36.569: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:36.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3976" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":38,"skipped":773,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:35.175: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a volume subpath [sig-storage] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test substitution in volume subpath
Jan 11 14:58:35.211: INFO: Waiting up to 5m0s for pod "var-expansion-c369c814-8673-4b45-a781-8f2267778f5a" in namespace "var-expansion-2540" to be "Succeeded or Failed"
Jan 11 14:58:35.214: INFO: Pod "var-expansion-c369c814-8673-4b45-a781-8f2267778f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.621026ms
Jan 11 14:58:37.217: INFO: Pod "var-expansion-c369c814-8673-4b45-a781-8f2267778f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005669888s
STEP: Saw pod success
Jan 11 14:58:37.217: INFO: Pod "var-expansion-c369c814-8673-4b45-a781-8f2267778f5a" satisfied condition "Succeeded or Failed"
Jan 11 14:58:37.219: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-2py7ys pod var-expansion-c369c814-8673-4b45-a781-8f2267778f5a container dapi-container: <nil>
STEP: delete the pod
Jan 11 14:58:37.233: INFO: Waiting for pod var-expansion-c369c814-8673-4b45-a781-8f2267778f5a to disappear
Jan 11 14:58:37.235: INFO: Pod var-expansion-c369c814-8673-4b45-a781-8f2267778f5a no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:37.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2540" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage] [Conformance]","total":-1,"completed":30,"skipped":557,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:36.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 11 14:58:40.657: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:40.657: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:40.742: INFO: Exec stderr: ""
Jan 11 14:58:40.742: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:40.742: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:40.814: INFO: Exec stderr: ""
Jan 11 14:58:40.814: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:40.814: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:40.891: INFO: Exec stderr: ""
Jan 11 14:58:40.891: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:40.891: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:40.939: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 11 14:58:40.939: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:40.939: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.021: INFO: Exec stderr: ""
Jan 11 14:58:41.021: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:41.021: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.077: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 11 14:58:41.077: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:41.077: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.174: INFO: Exec stderr: ""
Jan 11 14:58:41.174: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:41.174: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.247: INFO: Exec stderr: ""
Jan 11 14:58:41.247: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:41.247: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.306: INFO: Exec stderr: ""
Jan 11 14:58:41.306: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8547 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:41.306: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:41.392: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:41.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8547" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":781,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:41.442: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:58:41.510: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"8d642668-93ae-4c53-90d2-53cba8ea9e63", Controller:(*bool)(0xc00338935a), BlockOwnerDeletion:(*bool)(0xc00338935b)}}
Jan 11 14:58:41.517: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"edb13325-aee1-48bc-8218-b0e89fd41548", Controller:(*bool)(0xc0038ad28e), BlockOwnerDeletion:(*bool)(0xc0038ad28f)}}
Jan 11 14:58:41.527: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"b794b12d-6805-4dee-b50d-fcc7e0c74598", Controller:(*bool)(0xc00338954a), BlockOwnerDeletion:(*bool)(0xc00338954b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:46.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5166" for this suite.
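The three OwnerReferences logged above form a deliberate cycle, pod1 owned by pod3, pod2 by pod1, and pod3 by pod2, and the spec then checks that the garbage collector still makes deletion progress instead of waiting on the loop forever. A minimal sketch of building that shape with the k8s.io/apimachinery types visible in the dump (the UIDs are placeholders and the program only prints the references; it is an illustration, not the spec's source):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// ownedBy builds the OwnerReference shape logged above, with both
// Controller and BlockOwnerDeletion set, so the pod carrying the
// reference blocks foreground deletion of its named owner.
func ownedBy(owner string, uid types.UID) metav1.OwnerReference {
	controller, block := true, true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner,
		UID:                uid,
		Controller:         &controller,
		BlockOwnerDeletion: &block,
	}
}

func main() {
	// pod1 -> pod3 -> pod2 -> pod1: a dependency circle.
	refs := map[string]metav1.OwnerReference{
		"pod1": ownedBy("pod3", "uid-of-pod3"), // placeholder UIDs
		"pod2": ownedBy("pod1", "uid-of-pod1"),
		"pod3": ownedBy("pod2", "uid-of-pod2"),
	}
	for pod, ref := range refs {
		fmt.Printf("%s.OwnerReferences = [{%s %s}]\n", pod, ref.Kind, ref.Name)
	}
}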
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":40,"skipped":802,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:46.575: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating Pod
STEP: Reading file content from the nginx-container
Jan 11 14:58:48.634: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-271 PodName:pod-sharedvolume-9de7d40b-8149-4eed-a2ee-486f8ed1c6c8 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 11 14:58:48.634: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 11 14:58:48.723: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:48.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-271" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":41,"skipped":810,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:48.749: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 11 14:58:49.265: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jan 11 14:58:51.276: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045929, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045929, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045929, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809045929, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 11 14:58:54.290: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 11 14:58:54.293: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-279-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:55.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1325" for this suite.
STEP: Destroying namespace "webhook-1325-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":42,"skipped":824,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:55.585: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 11 14:58:56.634: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-992" for this suite.
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":841,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:56.698: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap with name configmap-test-volume-0dea435b-7a63-48bb-94ce-ab199e8c2e28
STEP: Creating a pod to test consume configMaps
Jan 11 14:58:56.737: INFO: Waiting up to 5m0s for pod "pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232" in namespace "configmap-6786" to be "Succeeded or Failed"
Jan 11 14:58:56.740: INFO: Pod "pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.244866ms
Jan 11 14:58:58.743: INFO: Pod "pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005517996s
STEP: Saw pod success
Jan 11 14:58:58.743: INFO: Pod "pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232" satisfied condition "Succeeded or Failed"
Jan 11 14:58:58.745: INFO: Trying to get logs from node k8s-upgrade-and-conformance-dctc5v-worker-cvzb96 pod pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232 container agnhost-container: <nil>
STEP: delete the pod
Jan 11 14:58:58.761: INFO: Waiting for pod pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232 to disappear
Jan 11 14:58:58.764: INFO: Pod pod-configmaps-a0097d5c-9216-4c5d-95dc-f1dbf0bee232 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:58:58.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6786" for this suite.
SSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:37.246: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 14:58:39.306: INFO: DNS probes using dns-test-e7a3fac4-4e40-4c73-9f55-183c560709b6 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 14:58:41.350: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:41.354: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:41.354: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:58:46.358: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:46.361: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:46.361: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:58:51.358: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:51.361: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:51.361: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:58:56.357: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:56.360: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:58:56.360: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:59:01.357: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:59:01.361: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:59:01.361: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:59:06.358: INFO: File wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:59:06.361: INFO: File jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local from pod dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 11 14:59:06.361: INFO: Lookups using dns-7680/dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 failed for: [wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local]
Jan 11 14:59:11.364: INFO: DNS probes using dns-test-af6885f8-ed56-47a0-96cd-468f2d1a03c7 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7680.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7680.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 11 14:59:13.432: INFO: DNS probes using dns-test-f200102b-6596-45a0-b155-0621918c45f0 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 11 14:59:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7680" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":31,"skipped":559,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
S
------------------------------
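Annotation: the retry loop above is the probe pods polling the service's CNAME after spec.externalName was changed; the stale 'foo.example.com.' answers persist until CoreDNS picks up the updated Service, after which the probes succeed. A small in-cluster Go sketch of the same check (hostname and target come from the log; the polling cadence mirrors the ~5 s retries but is otherwise illustrative, and this only resolves from inside a pod):

    // Sketch: poll a headless service name until its CNAME reflects the
    // updated externalName, as the dig loops in the probe pods do.
    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        const (
            host = "dns-test-service-3.dns-7680.svc.cluster.local"
            want = "bar.example.com." // set via the Service's spec.externalName
        )
        for i := 0; i < 30; i++ { // mirrors the `for i in `seq 1 30`` dig loop
            cname, err := net.DefaultResolver.LookupCNAME(context.TODO(), host)
            if err == nil && cname == want {
                fmt.Println("CNAME updated:", cname)
                return
            }
            // Stale answers (foo.example.com.) are expected briefly while
            // CoreDNS re-reads the Service; retry as the e2e probes do.
            fmt.Printf("attempt %d: got %q, err=%v\n", i, cname, err)
            time.Sleep(5 * time.Second)
        }
    }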
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 11 14:58:58.780: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103
STEP: Creating service test in namespace statefulset-1125
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1125
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1125
Jan 11 14:58:58.827: INFO: Found 0 stateful pods, waiting for 1
Jan 11 14:59:08.831: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 11 14:59:08.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1125 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 11 14:59:09.018: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 11 14:59:09.018: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 11 14:59:09.018: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 11 14:59:09.021: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 11 14:59:19.026: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 11 14:59:19.026: INFO: Waiting for statefulset status.replicas updated to 0
Jan 11 14:59:19.037: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999611s
Jan 11 14:59:20.041: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.99706163s
Jan 11 14:59:21.045: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.993286481s
Jan 11 14:59:22.048: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.989412884s
Jan 11 14:59:23.052: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.985835865s
Jan 11 14:59:24.056: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.981741291s
Jan 11 14:59:25.060: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.97823283s
Jan 11 14:59:26.063: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.974475795s
Jan 11 14:59:27.067: INFO: Verifying statefulset ss
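Annotation: the "Verifying statefulset ss doesn't scale past 1" lines are a once-per-second polling loop: with ss-0 deliberately made unready (its index.html moved aside), the StatefulSet controller must not create ss-1, so the observed replica count has to hold at 1 for the whole window. A hedged client-go sketch of that check (namespace and name come from the log; the window length and use of status.Replicas are illustrative, not the e2e framework's exact code):

    // Sketch: assert a StatefulSet does not scale past 1 replica while its
    // only pod is unhealthy, by polling over a fixed verification window.
    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(10 * time.Second) // ~ the 10 s window in the log
        for time.Now().Before(deadline) {
            ss, err := client.AppsV1().StatefulSets("statefulset-1125").Get(context.TODO(), "ss", metav1.GetOptions{})
            if err != nil {
                panic(err)
            }
            if ss.Status.Replicas > 1 {
                panic(fmt.Sprintf("scaled past 1 while unhealthy: %d replicas", ss.Status.Replicas))
            }
            fmt.Printf("still %d replica(s); ok\n", ss.Status.Replicas)
            time.Sleep(time.Second)
        }
        fmt.Println("statefulset correctly halted at 1 replica")
    }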