Recent runs | View in Spyglass |
Result | FAILURE |
Tests | 1 failed / 0 succeeded |
Started | |
Elapsed | 1h11m |
Revision | release-1.1 |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
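The --ginkgo.focus argument above is a shell-escaped regular expression that selects exactly one spec. As a readability aid only, here is a small Go sketch (Ginkgo's focus matching is built on the standard regexp package; the decoded spec string below is reconstructed from the pattern, not copied from the job config):

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // The --ginkgo.focus pattern from the command above, with shell escaping removed.
        focus := regexp.MustCompile(`capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$`)

        // Human-readable name of the spec that failed in this run, decoded from the pattern.
        spec := "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance " +
            "[Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest"

        fmt.Println(focus.MatchString(spec)) // true: the focus matches exactly this spec
    }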
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc001b36a08>: {
        error: <*errors.withMessage | 0xc001788260>{
            cause: <*errors.errorString | 0xc001333240>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
    Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
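The nested <*errors.withStack | ...> and <*errors.withMessage | ...> values in the dump are the shape produced by github.com/pkg/errors wrapping a plain error. A minimal sketch of that wrapping pattern, assuming nothing about the real cluster_upgrade.go code beyond the messages shown above (runConformance and main are illustrative stand-ins):

    package main

    import (
        "fmt"

        "github.com/pkg/errors"
    )

    // runConformance stands in for the container invocation that failed; it returns
    // a plain error like the *errors.errorString cause shown in the dump above.
    func runConformance() error {
        return fmt.Errorf("error container run failed with exit code 1")
    }

    func main() {
        if err := runConformance(); err != nil {
            // errors.Wrap attaches a message and records a stack trace, which is why the
            // failure renders as *errors.withStack -> *errors.withMessage -> the cause.
            wrapped := errors.Wrap(err, "Unable to run conformance tests")
            fmt.Println(wrapped) // Unable to run conformance tests: error container run failed with exit code 1
        }
    }

Printing the wrapped error with %+v instead of %v would also emit the recorded stack, which is where the stack: [...] addresses above come from.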
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-5na568
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-5na568"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-4w1i3t" using the "upgrades-cgroupfs" template (Kubernetes v1.19.16, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-4w1i3t --infrastructure (default) --kubernetes-version v1.19.16 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-4w1i3t-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-4w1i3t-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-5na568/k8s-upgrade-and-conformance-4w1i3t-ps8cv to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-5na568/k8s-upgrade-and-conformance-4w1i3t-ps8cv to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.20.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-5na568/k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf to be upgraded to v1.20.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.20.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-5na568/k8s-upgrade-and-conformance-4w1i3t-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-5na568/k8s-upgrade-and-conformance-4w1i3t-mp-0 to be upgraded from v1.19.16 to v1.20.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.20.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673620898 - Will randomize all specs
Will run 5668 specs
Running in parallel across 4 nodes
Jan 13 14:41:40.329: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 13 14:41:40.332: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 13 14:41:40.348: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 13 14:41:40.385: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 13 14:41:40.385: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 13 14:41:40.385: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 13 14:41:40.385: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 13 14:41:40.385: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 13 14:41:40.386: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 13 14:41:40.386: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready.
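From here the conformance suite blocks in a readiness poll, re-listing kube-system pods every two seconds (the repeated status tables below) until every pod is Running with Ready=true. A minimal client-go sketch of an equivalent check; the /tmp/kubeconfig path and the 'kube-system' namespace come from the log, while the polling code itself is an illustrative stand-in for the e2e framework's own wait helpers:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady mirrors the condition the log waits on: phase Running and the
    // Ready condition set to True.
    func podReady(pod corev1.Pod) bool {
        if pod.Status.Phase != corev1.PodRunning {
            return false
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        // Re-list kube-system every 2s, like the "(N seconds elapsed)" lines in the log.
        for {
            pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
            if err != nil {
                panic(err)
            }
            ready := 0
            for _, p := range pods.Items {
                if podReady(p) {
                    ready++
                }
            }
            fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
            if ready == len(pods.Items) {
                return
            }
            time.Sleep(2 * time.Second)
        }
    }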
Jan 13 14:41:40.386: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:40.386: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:40.386: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:40.386: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:40.386: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:40.386: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:40.386: INFO: Jan 13 14:41:42.407: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:42.407: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:42.407: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:42.407: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:42.407: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:42.407: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Jan 13 14:41:42.407: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:42.407: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:42.407: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:42.407: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:42.407: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:42.407: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:42.407: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:42.407: INFO: Jan 13 14:41:44.404: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:44.404: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:44.404: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:44.404: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:44.404: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:44.404: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Jan 13 14:41:44.404: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:44.404: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:44.404: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:44.404: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:44.404: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:44.405: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:44.405: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:44.405: INFO: Jan 13 14:41:46.407: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:46.407: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:46.407: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:46.407: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:46.408: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:46.408: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Jan 13 14:41:46.408: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:46.408: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:46.408: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:46.408: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:46.408: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:46.408: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:46.408: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:46.408: INFO: Jan 13 14:41:48.404: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:48.404: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:48.404: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:48.404: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:48.404: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:48.405: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Jan 13 14:41:48.405: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:48.405: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:48.405: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:48.405: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:48.405: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:48.405: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:48.405: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:48.405: INFO: Jan 13 14:41:50.405: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:50.405: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:50.405: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:50.405: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:50.405: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:50.405: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Jan 13 14:41:50.405: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:50.405: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:50.405: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:50.405: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:50.405: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:50.405: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:50.405: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:50.405: INFO: Jan 13 14:41:52.404: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:52.404: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:52.404: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:52.404: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:52.404: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:52.404: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Jan 13 14:41:52.404: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:52.404: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:52.404: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:52.404: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:52.404: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:52.404: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:52.404: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:52.404: INFO: Jan 13 14:41:54.404: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:54.405: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:54.405: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:54.405: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:54.405: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:54.405: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (14 seconds elapsed) Jan 13 14:41:54.405: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:54.405: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:54.405: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:54.405: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:54.405: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:54.405: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:54.405: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:54.405: INFO: Jan 13 14:41:56.409: INFO: The status of Pod coredns-f9fd979d6-hfqsz is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:56.409: INFO: The status of Pod kindnet-85kpq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:56.409: INFO: The status of Pod kindnet-tmgbn is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:56.409: INFO: The status of Pod kube-proxy-mzskp is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:56.409: INFO: The status of Pod kube-proxy-s8bqx is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:56.409: INFO: 15 / 20 pods in namespace 'kube-system' are running and ready (16 seconds elapsed) Jan 13 14:41:56.409: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:56.409: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:56.409: INFO: coredns-f9fd979d6-hfqsz k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:10 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:39:05 +0000 UTC }] Jan 13 14:41:56.409: INFO: kindnet-85kpq k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:07 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:33:05 +0000 UTC }] Jan 13 14:41:56.409: INFO: kindnet-tmgbn k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:59 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:32:49 +0000 UTC }] Jan 13 14:41:56.409: INFO: kube-proxy-mzskp k8s-upgrade-and-conformance-4w1i3t-worker-b5ayd5 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:38:38 +0000 UTC }] Jan 13 14:41:56.409: INFO: kube-proxy-s8bqx k8s-upgrade-and-conformance-4w1i3t-worker-ge3sx7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:02 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:40:01 +0000 UTC }] Jan 13 14:41:56.409: INFO: Jan 13 14:41:58.401: INFO: The status of Pod coredns-f9fd979d6-pggdh is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:41:58.402: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (18 seconds elapsed) Jan 13 14:41:58.402: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:41:58.402: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:41:58.402: INFO: coredns-f9fd979d6-pggdh k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC }] Jan 13 14:41:58.402: INFO: Jan 13 14:42:00.406: INFO: The status of Pod coredns-f9fd979d6-pggdh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:42:00.406: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (20 seconds elapsed) Jan 13 14:42:00.406: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 13 14:42:00.406: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:42:00.406: INFO: coredns-f9fd979d6-pggdh k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC }] Jan 13 14:42:00.406: INFO: Jan 13 14:42:02.408: INFO: The status of Pod coredns-f9fd979d6-pggdh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:42:02.408: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (22 seconds elapsed) Jan 13 14:42:02.408: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. Jan 13 14:42:02.408: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:42:02.408: INFO: coredns-f9fd979d6-pggdh k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC }] Jan 13 14:42:02.408: INFO: Jan 13 14:42:04.401: INFO: The status of Pod coredns-f9fd979d6-pggdh is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 13 14:42:04.401: INFO: 15 / 16 pods in namespace 'kube-system' are running and ready (24 seconds elapsed) Jan 13 14:42:04.401: INFO: expected 2 pod replicas in namespace 'kube-system', 1 are Running and Ready. 
Jan 13 14:42:04.401: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:42:04.401: INFO: coredns-f9fd979d6-pggdh k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:41:57 +0000 UTC }] Jan 13 14:42:04.401: INFO: Jan 13 14:42:06.400: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (26 seconds elapsed) Jan 13 14:42:06.400: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. Jan 13 14:42:06.400: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Jan 13 14:42:06.408: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Jan 13 14:42:06.408: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Jan 13 14:42:06.408: INFO: e2e test version: v1.20.15 Jan 13 14:42:06.410: INFO: kube-apiserver version: v1.20.15 Jan 13 14:42:06.410: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 14:42:06.415: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m Jan 13 14:42:06.436: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 14:42:06.455: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m Jan 13 14:42:06.444: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 14:42:06.459: INFO: Cluster IP family: ipv4 �[36mS�[0m �[90m------------------------------�[0m Jan 13 14:42:06.444: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 14:42:06.465: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: 
Creating a kubernetes client Jan 13 14:42:06.621: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts Jan 13 14:42:06.654: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should run through the lifecycle of a ServiceAccount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a ServiceAccount �[1mSTEP�[0m: watching for the ServiceAccount to be added �[1mSTEP�[0m: patching the ServiceAccount �[1mSTEP�[0m: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector) �[1mSTEP�[0m: deleting the ServiceAccount [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:06.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-7669" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":1,"skipped":90,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:06.512: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime Jan 13 14:42:06.554: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 13 14:42:09.601: INFO: Expected: &{OK} to match Container's Termination Message: OK -- �[1mSTEP�[0m: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:09.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-828" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":21,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:06.452: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api Jan 13 14:42:06.506: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:42:06.518: INFO: Waiting up to 5m0s for pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d" in namespace "downward-api-4127" to be "Succeeded or Failed" Jan 13 14:42:06.530: INFO: Pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.069336ms Jan 13 14:42:08.536: INFO: Pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012444996s Jan 13 14:42:10.540: INFO: Pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016537819s Jan 13 14:42:12.544: INFO: Pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020909908s �[1mSTEP�[0m: Saw pod success Jan 13 14:42:12.544: INFO: Pod "downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d" satisfied condition "Succeeded or Failed" Jan 13 14:42:12.548: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-ceauut pod downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:42:12.593: INFO: Waiting for pod downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d to disappear Jan 13 14:42:12.596: INFO: Pod downwardapi-volume-30142d3e-33fd-4003-857f-6fbf4ac76d1d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:12.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4127" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":25,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:12.701: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating pod Jan 13 14:42:14.771: INFO: Pod pod-hostip-d053cc12-ec31-4fb2-89fc-124523b12276 has hostIP: 172.18.0.6 [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:14.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-5694" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":76,"failed":0} [BeforeEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:14.784: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename lease-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:14.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "lease-test-3400" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":-1,"completed":3,"skipped":76,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:14.906: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1520 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 14:42:14.940: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8905 run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine' Jan 13 14:42:15.345: INFO: stderr: "" Jan 13 14:42:15.345: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1524 Jan 13 14:42:15.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8905 delete pods e2e-test-httpd-pod' Jan 13 14:42:18.719: INFO: stderr: "" Jan 13 14:42:18.719: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:18.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8905" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":4,"skipped":77,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:18.732: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: modifying the configmap a second time �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: creating a watch on configmaps from the resource version returned by the first update �[1mSTEP�[0m: Expecting to observe notifications for all changes to the configmap after the first update Jan 13 14:42:18.791: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9103 2188b760-ba22-4c9f-8d0a-4ac1dfc7fa6c 2768 0 2023-01-13 14:42:18 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-13 14:42:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 14:42:18.792: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9103 2188b760-ba22-4c9f-8d0a-4ac1dfc7fa6c 2769 0 2023-01-13 14:42:18 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-13 14:42:18 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:18.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-9103" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":5,"skipped":78,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:06.758: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8831.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8831.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8831.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8831.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8831.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8831.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done �[1mSTEP�[0m: creating a pod to probe /etc/hosts �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:42:18.870: INFO: DNS probes using dns-8831/dns-test-38e05c8d-696d-472f-a934-bac5b8cafbe9 succeeded �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-8831" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:06.492: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota Jan 13 14:42:06.536: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a ResourceQuota with terminating scope �[1mSTEP�[0m: Ensuring ResourceQuota status is calculated �[1mSTEP�[0m: Creating a ResourceQuota with not terminating scope �[1mSTEP�[0m: Ensuring ResourceQuota status is calculated �[1mSTEP�[0m: Creating a long running pod �[1mSTEP�[0m: Ensuring resource quota with not terminating scope captures the pod usage �[1mSTEP�[0m: Ensuring resource quota with terminating scope ignored the pod usage �[1mSTEP�[0m: Deleting the pod �[1mSTEP�[0m: Ensuring resource quota status released the pod usage �[1mSTEP�[0m: Creating a terminating pod �[1mSTEP�[0m: Ensuring resource quota with terminating scope captures the pod usage �[1mSTEP�[0m: Ensuring resource quota with not terminating scope ignored the pod usage �[1mSTEP�[0m: Deleting the pod �[1mSTEP�[0m: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:22.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-4618" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":-1,"completed":1,"skipped":12,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:22.685: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating secret with name secret-test-aac54085-0702-434d-b326-6978f68b0f5a �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 14:42:22.729: INFO: Waiting up to 5m0s for pod "pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b" in namespace "secrets-1386" to be "Succeeded or Failed" Jan 13 14:42:22.732: INFO: Pod "pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.818206ms Jan 13 14:42:24.737: INFO: Pod "pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007251174s
STEP: Saw pod success
Jan 13 14:42:24.737: INFO: Pod "pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b" satisfied condition "Succeeded or Failed"
Jan 13 14:42:24.739: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b container secret-volume-test: <nil>
STEP: delete the pod
Jan 13 14:42:24.768: INFO: Waiting for pod pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b to disappear
Jan 13 14:42:24.770: INFO: Pod pod-secrets-c02c28a1-56e5-4ab0-952f-3cb3743c537b no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:42:24.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1386" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:42:24.781: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[It] should support --unix-socket=/path [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Starting the proxy
Jan 13 14:42:24.827: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1684 proxy --unix-socket=/tmp/kubectl-proxy-unix795039760/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:42:24.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1684" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":133,"failed":0} [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:18.917: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename limitrange �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a LimitRange �[1mSTEP�[0m: Setting up watch �[1mSTEP�[0m: Submitting a LimitRange Jan 13 14:42:19.000: INFO: observed the limitRanges list �[1mSTEP�[0m: Verifying LimitRange creation was observed �[1mSTEP�[0m: Fetching the LimitRange to ensure it has proper values Jan 13 14:42:19.011: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Jan 13 14:42:19.011: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with no resource requirements �[1mSTEP�[0m: Ensuring Pod has resource requirements applied from LimitRange Jan 13 14:42:19.040: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Jan 13 14:42:19.040: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with partial resource requirements �[1mSTEP�[0m: Ensuring Pod has merged resource requirements applied from LimitRange Jan 13 14:42:19.055: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} 
memory:{{157286400 0} {<nil>} 150Mi BinarySI}] Jan 13 14:42:19.056: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Failing to create a Pod with less than min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Updating a LimitRange �[1mSTEP�[0m: Verifying LimitRange updating is effective �[1mSTEP�[0m: Creating a Pod with less than former min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Deleting a LimitRange �[1mSTEP�[0m: Verifying the LimitRange was deleted Jan 13 14:42:26.101: INFO: limitRange is already deleted �[1mSTEP�[0m: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:26.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "limitrange-9515" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":3,"skipped":133,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:26.216: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-c45fcada-15ef-4785-85c7-b9941491ecd3 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:42:26.258: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5119991-71aa-48be-a291-8da827514cae" in namespace "configmap-7963" to be "Succeeded or Failed" Jan 13 14:42:26.262: INFO: Pod "pod-configmaps-a5119991-71aa-48be-a291-8da827514cae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.783283ms Jan 13 14:42:28.265: INFO: Pod "pod-configmaps-a5119991-71aa-48be-a291-8da827514cae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007459835s �[1mSTEP�[0m: Saw pod success Jan 13 14:42:28.265: INFO: Pod "pod-configmaps-a5119991-71aa-48be-a291-8da827514cae" satisfied condition "Succeeded or Failed" Jan 13 14:42:28.268: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-configmaps-a5119991-71aa-48be-a291-8da827514cae container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:42:28.282: INFO: Waiting for pod pod-configmaps-a5119991-71aa-48be-a291-8da827514cae to disappear Jan 13 14:42:28.284: INFO: Pod pod-configmaps-a5119991-71aa-48be-a291-8da827514cae no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:28.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-7963" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":198,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:24.941: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:28.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-6936" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":54,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:28.304: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:42:28.349: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e" in namespace "downward-api-5956" to be "Succeeded or Failed" Jan 13 14:42:28.353: INFO: Pod "downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.985935ms Jan 13 14:42:30.356: INFO: Pod "downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006662963s �[1mSTEP�[0m: Saw pod success Jan 13 14:42:30.356: INFO: Pod "downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e" satisfied condition "Succeeded or Failed" Jan 13 14:42:30.359: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:42:30.386: INFO: Waiting for pod downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e to disappear Jan 13 14:42:30.389: INFO: Pod downwardapi-volume-4a50a3dc-ccdc-40dc-a44a-3aabe2ae6c6e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:30.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-5956" for this suite. 
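For reference, the Downward API volume spec logged above boils down to a pod whose volume file is projected from the container's own memory request. A minimal sketch of such a pod spec follows; the image, file path and request size are illustrative assumptions, not the values used by the conformance test itself.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// Projects the container's own requests.memory into the volume file.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	fmt.Println("pod spec prepared:", pod.Name)
}
```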
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":205,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:18.828: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: setting up watch �[1mSTEP�[0m: submitting the pod to kubernetes Jan 13 14:42:18.894: INFO: observed the pod list �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: verifying pod creation was observed �[1mSTEP�[0m: deleting the pod gracefully �[1mSTEP�[0m: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:32.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-1392" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":83,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:30.412: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 14:42:31.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 14:42:33.148: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809217751, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809217751, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809217751, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809217751, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 13 14:42:36.163: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:42:36.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6222" for this suite.
STEP: Destroying namespace "webhook-6222-markers" for this suite.
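For reference, the discovery checks logged above (fetching /apis, then the admissionregistration.k8s.io group and its v1 resource list) can be reproduced with the client-go discovery client roughly as in the sketch below; only the kubeconfig path is taken from the log, everything else is illustrative.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// /apis discovery document: find the admissionregistration.k8s.io group.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "admissionregistration.k8s.io" {
			fmt.Println("group found, preferred version:", g.PreferredVersion.GroupVersion)
		}
	}

	// /apis/admissionregistration.k8s.io/v1 document: list its resources, which
	// should include mutatingwebhookconfigurations and validatingwebhookconfigurations.
	rl, err := dc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range rl.APIResources {
		fmt.Println("resource:", r.Name)
	}
}
```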
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":6,"skipped":215,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:36.385: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 14:42:37.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 14:42:40.078: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:42:40.082: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-8509-crds.webhook.example.com via the AdmissionRegistration API Jan 13 14:42:40.617: INFO: Waiting for webhook configuration to be ready... �[1mSTEP�[0m: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:41.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-6803" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-6803-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":7,"skipped":302,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:29.116: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename subpath �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 �[1mSTEP�[0m: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating pod pod-subpath-test-downwardapi-95qn �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Jan 13 14:42:29.166: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-95qn" in namespace "subpath-7662" to be "Succeeded or Failed" Jan 13 14:42:29.168: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628008ms Jan 13 14:42:31.171: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 2.005372617s Jan 13 14:42:33.177: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 4.010954688s Jan 13 14:42:35.185: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 6.019364539s Jan 13 14:42:37.189: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 8.023323142s Jan 13 14:42:39.192: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 10.026719482s Jan 13 14:42:41.198: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 12.032604944s Jan 13 14:42:43.202: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 14.036448995s Jan 13 14:42:45.207: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 16.041464468s Jan 13 14:42:47.211: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 18.04517494s Jan 13 14:42:49.215: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 20.049134401s Jan 13 14:42:51.220: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Running", Reason="", readiness=true. Elapsed: 22.054794349s Jan 13 14:42:53.225: INFO: Pod "pod-subpath-test-downwardapi-95qn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.059036206s �[1mSTEP�[0m: Saw pod success Jan 13 14:42:53.225: INFO: Pod "pod-subpath-test-downwardapi-95qn" satisfied condition "Succeeded or Failed" Jan 13 14:42:53.227: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-subpath-test-downwardapi-95qn container test-container-subpath-downwardapi-95qn: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:42:53.246: INFO: Waiting for pod pod-subpath-test-downwardapi-95qn to disappear Jan 13 14:42:53.249: INFO: Pod pod-subpath-test-downwardapi-95qn no longer exists �[1mSTEP�[0m: Deleting pod pod-subpath-test-downwardapi-95qn Jan 13 14:42:53.249: INFO: Deleting pod "pod-subpath-test-downwardapi-95qn" in namespace "subpath-7662" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:53.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "subpath-7662" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":146,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:53.283: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Jan 13 14:42:53.330: INFO: Waiting up to 5m0s for pod "pod-1214e56e-228a-474f-bb48-6ff70266213c" in namespace "emptydir-7088" to be "Succeeded or Failed" Jan 13 14:42:53.333: INFO: Pod "pod-1214e56e-228a-474f-bb48-6ff70266213c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958913ms Jan 13 14:42:55.338: INFO: Pod "pod-1214e56e-228a-474f-bb48-6ff70266213c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008589286s �[1mSTEP�[0m: Saw pod success Jan 13 14:42:55.338: INFO: Pod "pod-1214e56e-228a-474f-bb48-6ff70266213c" satisfied condition "Succeeded or Failed" Jan 13 14:42:55.351: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-1214e56e-228a-474f-bb48-6ff70266213c container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:42:55.391: INFO: Waiting for pod pod-1214e56e-228a-474f-bb48-6ff70266213c to disappear Jan 13 14:42:55.394: INFO: Pod pod-1214e56e-228a-474f-bb48-6ff70266213c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:42:55.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-7088" for this suite. 
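For reference, the EmptyDir (non-root,0777,tmpfs) spec logged above amounts to a pod like the sketch below: a memory-backed emptyDir mounted into a container that runs as a non-root user and writes a mode-0777 file. The image, UID, paths and command are illustrative assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Create a file with mode 0777 on the tmpfs mount and show its permissions.
				Command: []string{"sh", "-c",
					"touch /test-volume/file && chmod 0777 /test-volume/file && ls -l /test-volume/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println("pod spec prepared:", pod.Name)
}
```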
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":158,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:55.439: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: retrieving the pod Jan 13 14:42:57.492: INFO: &Pod{ObjectMeta:{send-events-32dd1476-ff74-40ea-b531-bada617c7ea1 events-3340 550315fe-d604-4a1a-bfb5-9a038b3888fe 3413 0 2023-01-13 14:42:55 +0000 UTC <nil> <nil> map[name:foo time:472957854] map[] [] [] [{e2e.test Update v1 2023-01-13 14:42:55 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:time":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"p\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-13 14:42:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7rhfj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7rhfj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:p,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7rhfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:42:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 
14:42:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:42:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:42:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.14,StartTime:2023-01-13 14:42:55 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-13 14:42:56 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://4cbe23c0f73ada5c2746e131daae2717bf02abcb04f2bf26c7eb1f1dd497d7d0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} �[1mSTEP�[0m: checking for scheduler event about the pod Jan 13 14:42:59.497: INFO: Saw scheduler event for our pod. �[1mSTEP�[0m: checking for kubelet event about the pod Jan 13 14:43:01.501: INFO: Saw kubelet event for our pod. �[1mSTEP�[0m: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:43:01.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-3340" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":-1,"completed":7,"skipped":183,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:43:01.523: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1554 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 14:43:01.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 13 14:43:01.652: INFO: stderr: "" Jan 13 14:43:01.652: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Jan 13 14:43:06.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 get pod 
e2e-test-httpd-pod -o json' Jan 13 14:43:06.790: INFO: stderr: "" Jan 13 14:43:06.790: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-13T14:43:01Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl-run\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-13T14:43:01Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"192.168.0.15\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2023-01-13T14:43:03Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2974\",\n \"resourceVersion\": \"3473\",\n \"uid\": \"70c1fe45-f029-40e2-b2ca-504a5e074793\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-6xv94\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-6xv94\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-6xv94\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n 
\"lastTransitionTime\": \"2023-01-13T14:43:01Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-13T14:43:03Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-13T14:43:03Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-13T14:43:01Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://ed8286e3c518bbcdf5a0e7d5af31ebc59c4d6efecb6d7e5a914ec9903c954cbb\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-13T14:43:02Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.0.15\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.0.15\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-13T14:43:01Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Jan 13 14:43:06.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 replace -f -' Jan 13 14:43:07.777: INFO: stderr: "" Jan 13 14:43:07.777: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1558 Jan 13 14:43:07.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2974 delete pods e2e-test-httpd-pod' Jan 13 14:43:09.293: INFO: stderr: "" Jan 13 14:43:09.293: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:43:09.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-2974" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":8,"skipped":186,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:43:09.373: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:43:09.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 create -f -' Jan 13 14:43:09.629: INFO: stderr: "" Jan 13 14:43:09.629: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Jan 13 14:43:09.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 create -f -' Jan 13 14:43:09.859: INFO: stderr: "" Jan 13 14:43:09.859: INFO: stdout: "service/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Jan 13 14:43:10.864: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 14:43:10.864: INFO: Found 0 / 1 Jan 13 14:43:11.863: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 14:43:11.863: INFO: Found 1 / 1 Jan 13 14:43:11.863: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 13 14:43:11.865: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 14:43:11.865: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jan 13 14:43:11.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 describe pod agnhost-primary-9j74c' Jan 13 14:43:11.965: INFO: stderr: "" Jan 13 14:43:11.965: INFO: stdout: "Name: agnhost-primary-9j74c\nNamespace: kubectl-9952\nPriority: 0\nNode: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr/172.18.0.4\nStart Time: Fri, 13 Jan 2023 14:43:09 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.16\nIPs:\n IP: 192.168.0.16\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://d493e3d1e50e274cb573d37d8caf31cc04abfea1144be141c2eeb40dd3032bdf\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 13 Jan 2023 14:43:10 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-bjn9f (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-bjn9f:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-bjn9f\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-9952/agnhost-primary-9j74c to k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.21\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Jan 13 14:43:11.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 describe rc agnhost-primary' Jan 13 14:43:12.081: INFO: stderr: "" Jan 13 14:43:12.082: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9952\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.21\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-primary-9j74c\n" Jan 13 14:43:12.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 describe service agnhost-primary' Jan 13 14:43:12.187: INFO: stderr: "" Jan 13 14:43:12.187: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-9952\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Families: <none>\nIP: 10.138.85.105\nIPs: 10.138.85.105\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.0.16:6379\nSession Affinity: None\nEvents: <none>\n" Jan 13 14:43:12.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 describe node 
k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s' Jan 13 14:43:12.324: INFO: stderr: "" Jan 13 14:43:12.324: INFO: stdout: "Name: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\nRoles: <none>\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\n kubernetes.io/os=linux\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-4w1i3t\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-5na568\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\n cluster.x-k8s.io/owner-kind: MachineSet\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 13 Jan 2023 14:39:45 +0000\nTaints: <none>\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\n AcquireTime: <unset>\n RenewTime: Fri, 13 Jan 2023 14:43:08 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 13 Jan 2023 14:43:05 +0000 Fri, 13 Jan 2023 14:39:44 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 13 Jan 2023 14:43:05 +0000 Fri, 13 Jan 2023 14:39:44 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 13 Jan 2023 14:43:05 +0000 Fri, 13 Jan 2023 14:39:44 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 13 Jan 2023 14:43:05 +0000 Fri, 13 Jan 2023 14:40:05 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: 49883f781b68410a98364581454bb4e0\n System UUID: dc28442b-c96e-4d5b-9f41-0d22b7ce510c\n Boot ID: 3fa13983-c193-4f19-955e-f0cfe2f91a25\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 22.04.1 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.9\n Kubelet Version: v1.20.15\n Kube-Proxy Version: v1.20.15\nPodCIDR: 192.168.1.0/24\nPodCIDRs: 192.168.1.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s\nNon-terminated Pods: (5 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-f9fd979d6-pncqm 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 3m7s\n kube-system kindnet-9tjrv 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 3m27s\n kube-system kube-proxy-fhfwg 0 (0%) 0 (0%) 0 (0%) 0 (0%) 3m27s\n services-79 affinity-clusterip-4zdhs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 63s\n statefulset-4454 ss2-1 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 200m (2%) 100m (1%)\n memory 120Mi (0%) 220Mi (0%)\n 
ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 3m20s kube-proxy Starting kube-proxy.\n" Jan 13 14:43:12.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9952 describe namespace kubectl-9952' Jan 13 14:43:12.430: INFO: stderr: "" Jan 13 14:43:12.430: INFO: stdout: "Name: kubectl-9952\nLabels: e2e-framework=kubectl\n e2e-run=abd13cee-3540-4283-9a6d-bcfef0bc2e24\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:43:12.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9952" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":9,"skipped":230,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:09.667: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-79 �[1mSTEP�[0m: creating service affinity-clusterip in namespace services-79 �[1mSTEP�[0m: creating replication controller affinity-clusterip in namespace services-79 I0113 14:42:09.724556 16 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-79, replica count: 3 I0113 14:42:12.775143 16 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 14:42:15.775522 16 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:42:15.785: INFO: Creating new exec pod Jan 13 14:42:18.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-79 exec execpod-affinitytq9j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 13 14:42:21.158: INFO: rc: 1 Jan 13 14:42:21.158: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-79 exec execpod-affinitytq9j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 
Retrying... Jan 13 14:42:22.158 to Jan 13 14:44:19.158: INFO: the identical probe ('/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-79 exec execpod-affinitytq9j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80') was re-run roughly every three seconds; every attempt in this window returned rc: 1 and failed with the same error: + nc -zv -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying...
Jan 13 14:44:21.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-79 exec execpod-affinitytq9j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 13 14:44:23.516: INFO: rc: 1 Jan 13 14:44:23.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-79 exec execpod-affinitytq9j9 -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80: Command stdout: stderr: + nc -zv -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 13 14:44:23.516: FAIL: Unexpected error: <*errors.errorString | 0xc002c6e5b0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc001062000, 0x56112e0, 0xc002a4edc0, 0xc000e82280, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3444 +0x62e k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3403 k8s.io/kubernetes/test/e2e/network.glob..func24.25() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2405 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003602300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc003602300) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc003602300, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 Jan 13 14:44:23.516: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip in namespace services-79, will wait for the garbage collector to delete the pods Jan 13 14:44:23.717: INFO: Deleting ReplicationController affinity-clusterip took: 100.494346ms Jan 13 14:44:24.217: INFO: Terminating ReplicationController affinity-clusterip pods took: 500.29899ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:35.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-79" for this suite. 
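The failure summarized just below is a plain reachability timeout: every nc probe from the exec pod to the ClusterIP service affinity-clusterip:80 timed out for the full 2m0s window, so the suite never got as far as the affinity check itself, while the identical spec passes on the re-run that follows. A minimal manual triage of the same symptom, assuming the cluster is still reachable through /tmp/kubeconfig and the suite's pods have not yet been garbage-collected, might look like:

$ kubectl --kubeconfig=/tmp/kubeconfig -n services-79 get endpoints affinity-clusterip -o wide          # the three ReplicationController pods should be listed as ready endpoints
$ kubectl --kubeconfig=/tmp/kubeconfig -n services-79 exec execpod-affinitytq9j9 -- /bin/sh -c 'nc -zv -t -w 2 affinity-clusterip 80'   # the same probe the suite retries
$ kubectl --kubeconfig=/tmp/kubeconfig -n kube-system get pods -o wide | grep kube-proxy                # locate the kube-proxy instance on the exec pod's node
$ kubectl --kubeconfig=/tmp/kubeconfig -n kube-system logs kube-proxy-fhfwg --tail=50                   # e.g. the instance on ...-krr8s seen in the node describe above

If the endpoints are present but the probe still times out, the usual suspect is kube-proxy on the client node lagging behind the newly created Service, which would also fit a cluster that has just rolled through an upgrade and the fact that the immediate re-run below succeeds.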
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 • Failure [145.385 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:44:23.516: Unexpected error: <*errors.errorString | 0xc002c6e5b0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3444 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":39,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 14:44:35.056: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating service in namespace services-2015 STEP: creating service affinity-clusterip in namespace services-2015 STEP: creating replication controller affinity-clusterip in namespace services-2015 I0113 14:44:35.127649 16 runners.go:190] Created replication controller with name: affinity-clusterip, namespace: services-2015, replica count: 3 I0113 14:44:38.178096 16 runners.go:190] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:44:38.185: INFO: Creating new exec pod Jan 13 14:44:41.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2015 exec execpod-affinitymcvnz -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip 80' Jan 13 14:44:41.394: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 13 14:44:41.394: INFO: stdout: "" Jan 13 14:44:41.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2015 exec execpod-affinitymcvnz -- /bin/sh -x -c nc -zv -t -w 2 10.133.254.42 80' Jan 13 14:44:41.588: INFO: stderr: "+ nc -zv -t -w 2 10.133.254.42 80\nConnection to 10.133.254.42 80 port [tcp/http] succeeded!\n" Jan 13 14:44:41.588: INFO: stdout: "" Jan 13 14:44:41.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2015 exec execpod-affinitymcvnz --
/bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.133.254.42:80/ ; done' Jan 13 14:44:41.846: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.133.254.42:80/\n" Jan 13 14:44:41.847: INFO: stdout: "\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5\naffinity-clusterip-vmph5" Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Received response from host: affinity-clusterip-vmph5 Jan 13 14:44:41.847: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip in namespace services-2015, will wait for the garbage collector to delete the pods Jan 13 14:44:41.920: INFO: Deleting ReplicationController affinity-clusterip took: 6.616116ms Jan 13 14:44:42.020: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.26352ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:52.759: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-2015" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":2,"skipped":39,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:44:52.781: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-3abe7087-76bb-4318-90c2-3b9547900317 �[1mSTEP�[0m: Creating the pod �[1mSTEP�[0m: Waiting for pod with text data �[1mSTEP�[0m: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:54.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-1004" for this suite. 
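The ConfigMap spec above only waits for a text key and a binary key to show up inside a mounted volume. A quick way to see the same binaryData handling outside the suite, with hypothetical file and object names (the namespace configmap-1004 is torn down right after this test, so any namespace would do), is:

$ head -c 16 /dev/urandom > payload.bin                                                                  # deliberately non-UTF-8 content; hypothetical file name
$ kubectl --kubeconfig=/tmp/kubeconfig -n configmap-1004 create configmap binary-demo --from-file=payload.bin --from-literal=note=text-key   # hypothetical object name
$ kubectl --kubeconfig=/tmp/kubeconfig -n configmap-1004 get configmap binary-demo -o yaml               # the file lands under .binaryData, the literal under .data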
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":41,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:44:54.924: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename ingress �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/networking.k8s.io �[1mSTEP�[0m: getting /apis/networking.k8s.iov1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Jan 13 14:44:54.978: INFO: starting watch �[1mSTEP�[0m: cluster-wide listing �[1mSTEP�[0m: cluster-wide watching Jan 13 14:44:54.983: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Jan 13 14:44:54.995: INFO: waiting for watch events with expected annotations Jan 13 14:44:54.995: INFO: saw patched and updated annotations �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: get /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:55.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "ingress-423" for this suite. 
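The Ingress API spec above is pure API machinery: discovery, create, get, list, watch, patch, update, the status subresource, and delete against networking.k8s.io/v1. The discovery and listing half of that sequence can be replayed with plain kubectl; the sketch below mirrors the STEP lines and assumes nothing beyond the kubeconfig the suite already uses:

$ kubectl --kubeconfig=/tmp/kubeconfig get --raw /apis/networking.k8s.io       # group discovery
$ kubectl --kubeconfig=/tmp/kubeconfig get --raw /apis/networking.k8s.io/v1    # version discovery, lists the ingresses resource
$ kubectl --kubeconfig=/tmp/kubeconfig -n ingress-423 get ingresses            # namespaced listing
$ kubectl --kubeconfig=/tmp/kubeconfig get ingresses --all-namespaces          # cluster-wide listing, as in the cluster-wide listing step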
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":4,"skipped":63,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:44:55.099: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-d258fe1d-2aa0-4788-b706-be88e7cec239 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:44:55.137: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd" in namespace "projected-8058" to be "Succeeded or Failed" Jan 13 14:44:55.140: INFO: Pod "pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.385516ms Jan 13 14:44:57.144: INFO: Pod "pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006066716s �[1mSTEP�[0m: Saw pod success Jan 13 14:44:57.144: INFO: Pod "pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd" satisfied condition "Succeeded or Failed" Jan 13 14:44:57.146: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:44:57.166: INFO: Waiting for pod pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd to disappear Jan 13 14:44:57.169: INFO: Pod pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:57.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-8058" for this suite. 
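Each of the projected-volume specs in this stretch follows the pattern the framework logs above: create a short-lived pod, poll its phase until it reaches "Succeeded or Failed", read the test container's logs, then delete the pod. The equivalent manual loop for the pod above, with names taken verbatim from the log, is roughly the following; it is only meaningful while the pod and the projected-8058 namespace still exist, since the framework deletes both immediately:

$ kubectl --kubeconfig=/tmp/kubeconfig -n projected-8058 get pod pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd -o jsonpath='{.status.phase}'
$ kubectl --kubeconfig=/tmp/kubeconfig -n projected-8058 logs pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd -c agnhost-container
$ kubectl --kubeconfig=/tmp/kubeconfig -n projected-8058 delete pod pod-projected-configmaps-f15efb3c-9da3-4e61-bec4-6392bdb3bfdd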
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":95,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:44:57.186: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating projection with secret that has name projected-secret-test-3e93104a-73ec-41ad-9ffa-cc03d0acf3e6 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 14:44:57.223: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a" in namespace "projected-490" to be "Succeeded or Failed" Jan 13 14:44:57.229: INFO: Pod "pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.059757ms Jan 13 14:44:59.232: INFO: Pod "pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008793392s �[1mSTEP�[0m: Saw pod success Jan 13 14:44:59.232: INFO: Pod "pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a" satisfied condition "Succeeded or Failed" Jan 13 14:44:59.235: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy pod pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a container projected-secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:44:59.260: INFO: Waiting for pod pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a to disappear Jan 13 14:44:59.262: INFO: Pod pod-projected-secrets-3a8429e4-1c81-4b79-a644-d2d055ecef0a no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:44:59.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-490" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":100,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:44:59.287: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-74657814-f6a1-48cb-8d5f-5ab32bb9ba71 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:44:59.323: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0" in namespace "projected-6982" to be "Succeeded or Failed" Jan 13 14:44:59.326: INFO: Pod "pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.292567ms Jan 13 14:45:01.331: INFO: Pod "pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008464957s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:01.331: INFO: Pod "pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0" satisfied condition "Succeeded or Failed" Jan 13 14:45:01.335: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy pod pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:01.360: INFO: Waiting for pod pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0 to disappear Jan 13 14:45:01.363: INFO: Pod pod-projected-configmaps-e57abcb6-afa3-438b-87e2-7e9eaa0734c0 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:01.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-6982" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":113,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:01.397: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating secret with name projected-secret-test-dacf6a67-b11c-4bd1-a34a-9dc2b97c01f4 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 14:45:01.441: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00" in namespace "projected-1411" to be "Succeeded or Failed" Jan 13 14:45:01.444: INFO: Pod "pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495304ms Jan 13 14:45:03.448: INFO: Pod "pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007322715s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:03.448: INFO: Pod "pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00" satisfied condition "Succeeded or Failed" Jan 13 14:45:03.451: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:03.467: INFO: Waiting for pod pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00 to disappear Jan 13 14:45:03.470: INFO: Pod pod-projected-secrets-d05ec813-e367-474f-94aa-f4097e84da00 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:03.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-1411" for this suite. 
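The projected-volume family exercised here (configMap mappings, secret defaultMode/fsGroup, multiple sources mounted into one pod) all hangs off the same API surface. If the field layout is hard to see from the log alone, kubectl's schema help shows which sources a single projected mount can combine, and the secret the last spec mounts can be inspected while its namespace is still around:

$ kubectl --kubeconfig=/tmp/kubeconfig explain pod.spec.volumes.projected.sources          # configMap, secret, downwardAPI, serviceAccountToken
$ kubectl --kubeconfig=/tmp/kubeconfig explain pod.spec.volumes.projected.sources.secret.items
$ kubectl --kubeconfig=/tmp/kubeconfig -n projected-1411 get secret projected-secret-test-dacf6a67-b11c-4bd1-a34a-9dc2b97c01f4 -o yaml   # the secret the pod above consumes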
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":123,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:41.426: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 �[1mSTEP�[0m: Creating service test in namespace statefulset-4454 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a new StatefulSet Jan 13 14:42:41.585: INFO: Found 0 stateful pods, waiting for 3 Jan 13 14:42:51.589: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:42:51.589: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:42:51.589: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jan 13 14:43:01.590: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:43:01.590: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:43:01.590: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:43:01.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4454 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:43:01.794: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:43:01.794: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:43:01.794: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' �[1mSTEP�[0m: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Jan 13 14:43:11.826: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Updating Pods in reverse ordinal order Jan 13 14:43:21.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4454 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:43:22.004: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 14:43:22.004: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 14:43:22.004: INFO: stdout of 
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 14:43:32.025: INFO: Waiting for StatefulSet statefulset-4454/ss2 to complete update Jan 13 14:43:32.025: INFO: Waiting for Pod statefulset-4454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 14:43:32.025: INFO: Waiting for Pod statefulset-4454/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 14:43:32.025: INFO: Waiting for Pod statefulset-4454/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 14:43:42.033: INFO: Waiting for StatefulSet statefulset-4454/ss2 to complete update Jan 13 14:43:42.033: INFO: Waiting for Pod statefulset-4454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 14:43:42.033: INFO: Waiting for Pod statefulset-4454/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Jan 13 14:43:52.032: INFO: Waiting for StatefulSet statefulset-4454/ss2 to complete update Jan 13 14:43:52.032: INFO: Waiting for Pod statefulset-4454/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 �[1mSTEP�[0m: Rolling back to a previous revision Jan 13 14:44:02.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4454 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:44:02.200: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:44:02.200: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:44:02.200: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 14:44:12.235: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Rolling back update in reverse ordinal order Jan 13 14:44:22.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-4454 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:44:22.422: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 14:44:22.422: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 14:44:22.422: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 14:44:42.442: INFO: Deleting all statefulset in ns statefulset-4454 Jan 13 14:44:42.444: INFO: Scaling statefulset ss2 to 0 Jan 13 14:45:12.460: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 14:45:12.464: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:12.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-4454" for this suite. 
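Note: the rolling update and rollback exercised by this StatefulSet test can be reproduced by hand against the same workload cluster. This is a minimal sketch, not the exact calls the e2e framework makes; the container name "webserver" is an assumption, while the namespace, StatefulSet name, and image tags are taken from the log above.

    # assumes the workload cluster kubeconfig used by the test run
    export KUBECONFIG=/tmp/kubeconfig
    # roll forward to the new image (container name assumed)
    kubectl -n statefulset-4454 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
    kubectl -n statefulset-4454 rollout status statefulset/ss2
    # roll back to the previous revision, as the test does
    kubectl -n statefulset-4454 rollout undo statefulset/ss2
    kubectl -n statefulset-4454 rollout status statefulset/ss2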
�[32m• [SLOW TEST:151.062 seconds]�[0m [sig-apps] StatefulSet �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23�[0m [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624�[0m should perform rolling updates and roll backs of template modifications [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":8,"skipped":307,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:03.496: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the pod with lifecycle hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 13 14:45:07.583: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 14:45:07.587: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 14:45:09.587: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 14:45:09.590: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 14:45:11.587: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 14:45:11.591: INFO: Pod pod-with-prestop-exec-hook still exists Jan 13 14:45:13.587: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 13 14:45:13.591: INFO: Pod pod-with-prestop-exec-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:13.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-9397" for this suite. 
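Note: the lifecycle-hook test above only needs a pod carrying a preStop exec hook. A minimal sketch for reproducing it outside the framework follows; the pod name is reused from the log, but the image and hook command are assumptions rather than the test's exact spec.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-with-prestop-exec-hook
    spec:
      containers:
      - name: main
        image: busybox                      # assumed image
        command: ["sh", "-c", "sleep 3600"]
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "echo prestop > /tmp/prestop; sleep 5"]
    EOF
    # deleting the pod triggers the preStop hook before the container is stopped
    kubectl delete pod pod-with-prestop-exec-hook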
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":134,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:12.498: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:45:12.535: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722" in namespace "downward-api-4970" to be "Succeeded or Failed" Jan 13 14:45:12.538: INFO: Pod "downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960977ms Jan 13 14:45:14.543: INFO: Pod "downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007475103s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:14.543: INFO: Pod "downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722" satisfied condition "Succeeded or Failed" Jan 13 14:45:14.547: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:14.564: INFO: Waiting for pod downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722 to disappear Jan 13 14:45:14.567: INFO: Pod downwardapi-volume-8ab2e2b1-47c9-48a5-8c8e-d13bd5ba4722 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:14.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4970" for this suite. 
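Note: the Downward API volume check above amounts to exposing the container's CPU limit as a file inside the pod. A rough equivalent is sketched below; the pod name, image, and mount path are illustrative, not the generated names the test uses.

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-limit-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        resources:
          limits:
            cpu: "1"
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
    EOF
    # the container prints its own CPU limit (here: 1) read from the downward API file
    kubectl logs downwardapi-cpu-limit-demo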
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":311,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:13.637: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:15.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-7763" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":154,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:14.578: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir volume type on tmpfs Jan 13 14:45:14.620: INFO: Waiting up to 5m0s for pod "pod-ce9f397b-df19-4c4c-8897-8373846f3868" in namespace "emptydir-8383" to be "Succeeded or Failed" Jan 13 14:45:14.623: INFO: Pod "pod-ce9f397b-df19-4c4c-8897-8373846f3868": Phase="Pending", Reason="", readiness=false. Elapsed: 3.591109ms Jan 13 14:45:16.627: INFO: Pod "pod-ce9f397b-df19-4c4c-8897-8373846f3868": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007720569s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:16.627: INFO: Pod "pod-ce9f397b-df19-4c4c-8897-8373846f3868" satisfied condition "Succeeded or Failed" Jan 13 14:45:16.630: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-ce9f397b-df19-4c4c-8897-8373846f3868 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:16.649: INFO: Waiting for pod pod-ce9f397b-df19-4c4c-8897-8373846f3868 to disappear Jan 13 14:45:16.652: INFO: Pod pod-ce9f397b-df19-4c4c-8897-8373846f3868 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:16.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-8383" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":312,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:16.667: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:45:16.708: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09" in namespace "downward-api-985" to be "Succeeded or Failed" Jan 13 14:45:16.711: INFO: Pod "downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.381685ms Jan 13 14:45:18.715: INFO: Pod "downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007649001s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:18.716: INFO: Pod "downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09" satisfied condition "Succeeded or Failed" Jan 13 14:45:18.719: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:18.737: INFO: Waiting for pod downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09 to disappear Jan 13 14:45:18.739: INFO: Pod downwardapi-volume-2cfcc041-e240-4cae-978a-18f0e58fdc09 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:18.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-985" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":313,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:15.719: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:19.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-6705" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":168,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:19.787: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 13 14:45:19.825: INFO: Waiting up to 5m0s for pod "downward-api-45f6f6a4-facc-4a37-afc7-49083966714f" in namespace "downward-api-1096" to be "Succeeded or Failed" Jan 13 14:45:19.833: INFO: Pod "downward-api-45f6f6a4-facc-4a37-afc7-49083966714f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.540822ms Jan 13 14:45:21.838: INFO: Pod "downward-api-45f6f6a4-facc-4a37-afc7-49083966714f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013110662s �[1mSTEP�[0m: Saw pod success Jan 13 14:45:21.838: INFO: Pod "downward-api-45f6f6a4-facc-4a37-afc7-49083966714f" satisfied condition "Succeeded or Failed" Jan 13 14:45:21.842: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downward-api-45f6f6a4-facc-4a37-afc7-49083966714f container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:45:21.861: INFO: Waiting for pod downward-api-45f6f6a4-facc-4a37-afc7-49083966714f to disappear Jan 13 14:45:21.865: INFO: Pod downward-api-45f6f6a4-facc-4a37-afc7-49083966714f no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:21.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-1096" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":168,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:21.887: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename init-container �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:162 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod Jan 13 14:45:21.919: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:25.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "init-container-3221" for this suite. 
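Note: the init-container semantics being verified here are that init containers run to completion, in order, before the app container of a RestartAlways pod starts. A small sketch (names and images assumed) that makes the ordering visible:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-demo
    spec:
      restartPolicy: Always
      initContainers:
      - name: init-1
        image: busybox
        command: ["sh", "-c", "echo init-1 done"]
      - name: init-2
        image: busybox
        command: ["sh", "-c", "echo init-2 done"]
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "sleep 3600"]
    EOF
    # status should progress Init:0/2 -> Init:1/2 -> PodInitializing -> Running
    kubectl get pod init-demo -w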
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":13,"skipped":174,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:18.756: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 �[1mSTEP�[0m: Creating service test in namespace statefulset-3931 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Looking for a node to schedule stateful set and pod �[1mSTEP�[0m: Creating pod with conflicting port in namespace statefulset-3931 �[1mSTEP�[0m: Creating statefulset with conflicting port in namespace statefulset-3931 �[1mSTEP�[0m: Waiting until pod test-pod will start running in namespace statefulset-3931 �[1mSTEP�[0m: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3931 Jan 13 14:45:20.828: INFO: Observed stateful pod in namespace: statefulset-3931, name: ss-0, uid: dd0ca718-f37b-4250-9c5b-7a2214151e07, status phase: Pending. Waiting for statefulset controller to delete. Jan 13 14:45:21.419: INFO: Observed stateful pod in namespace: statefulset-3931, name: ss-0, uid: dd0ca718-f37b-4250-9c5b-7a2214151e07, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 14:45:21.427: INFO: Observed stateful pod in namespace: statefulset-3931, name: ss-0, uid: dd0ca718-f37b-4250-9c5b-7a2214151e07, status phase: Failed. Waiting for statefulset controller to delete. Jan 13 14:45:21.432: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3931 �[1mSTEP�[0m: Removing pod with conflicting port in namespace statefulset-3931 �[1mSTEP�[0m: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3931 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 14:45:23.454: INFO: Deleting all statefulset in ns statefulset-3931 Jan 13 14:45:23.457: INFO: Scaling statefulset ss to 0 Jan 13 14:45:33.471: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 14:45:33.474: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:45:33.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-3931" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":12,"skipped":317,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:25.686: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-2124 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Jan 13 14:45:25.715: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 14:45:25.758: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 14:45:27.761: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:29.761: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:31.761: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:33.768: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:35.762: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:37.762: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:39.762: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:41.763: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 14:45:43.761: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 14:45:43.767: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 13 14:45:45.770: INFO: The status of Pod netserver-1 is Running (Ready = true) Jan 13 14:45:45.776: INFO: The status of Pod netserver-2 is Running (Ready = true) Jan 13 14:45:45.782: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Jan 13 14:45:47.797: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Jan 13 14:45:47.797: INFO: Breadth first check of 192.168.1.13 on host 172.18.0.7... Jan 13 14:45:47.799: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.35:9080/dial?request=hostname&protocol=udp&host=192.168.1.13&port=8081&tries=1'] Namespace:pod-network-test-2124 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 14:45:47.799: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 14:45:47.886: INFO: Waiting for responses: map[] Jan 13 14:45:47.886: INFO: reached 192.168.1.13 after 0/1 tries Jan 13 14:45:47.886: INFO: Breadth first check of 192.168.0.31 on host 172.18.0.4... 
Jan 13 14:45:47.889: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.35:9080/dial?request=hostname&protocol=udp&host=192.168.0.31&port=8081&tries=1'] Namespace:pod-network-test-2124 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 13 14:45:47.889: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 13 14:45:47.964: INFO: Waiting for responses: map[]
Jan 13 14:45:47.964: INFO: reached 192.168.0.31 after 0/1 tries
Jan 13 14:45:47.964: INFO: Breadth first check of 192.168.2.13 on host 172.18.0.6...
Jan 13 14:45:47.967: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.35:9080/dial?request=hostname&protocol=udp&host=192.168.2.13&port=8081&tries=1'] Namespace:pod-network-test-2124 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 13 14:45:47.967: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 13 14:45:48.039: INFO: Waiting for responses: map[]
Jan 13 14:45:48.039: INFO: reached 192.168.2.13 after 0/1 tries
Jan 13 14:45:48.039: INFO: Breadth first check of 192.168.6.13 on host 172.18.0.5...
Jan 13 14:45:48.042: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.35:9080/dial?request=hostname&protocol=udp&host=192.168.6.13&port=8081&tries=1'] Namespace:pod-network-test-2124 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 13 14:45:48.042: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 13 14:45:48.114: INFO: Waiting for responses: map[]
Jan 13 14:45:48.114: INFO: reached 192.168.6.13 after 0/1 tries
Jan 13 14:45:48.114: INFO: Going to retry 0 out of 4 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:45:48.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2124" for this suite.
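Note: the connectivity probe the test performs is an exec'd curl against the agnhost /dial endpoint of the test pod. Re-running one of the checks from the log by hand would look roughly like this (namespace, pod name, and addresses are taken directly from the entries above):

    kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-2124 exec test-container-pod -- \
      /bin/sh -c "curl -g -q -s 'http://192.168.0.35:9080/dial?request=hostname&protocol=udp&host=192.168.1.13&port=8081&tries=1'"
    # a successful probe returns a JSON body whose responses list contains the target pod's hostname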
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":177,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:45:48.149: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 14:45:48.177: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:45:48.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4232" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":15,"skipped":195,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:45:48.734: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:45:50.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1598" for this suite.
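Note: the kubelet logging check above boils down to running a one-shot busybox command in a pod and reading the output back with kubectl logs. A hand-rolled version might look like this (pod name and message are made up; the namespace comes from the log):

    kubectl -n kubelet-test-1598 run busybox-logs-demo --image=busybox --restart=Never -- sh -c 'echo hello from busybox'
    sleep 5   # give the pod a moment to run to completion
    kubectl -n kubelet-test-1598 logs busybox-logs-demo   # expected output: hello from busybox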
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":204,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:43:12.465: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating all guestbook components Jan 13 14:43:12.498: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 13 14:43:12.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:12.749: INFO: stderr: "" Jan 13 14:43:12.749: INFO: stdout: "service/agnhost-replica created\n" Jan 13 14:43:12.749: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 13 14:43:12.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:13.020: INFO: stderr: "" Jan 13 14:43:13.020: INFO: stdout: "service/agnhost-primary created\n" Jan 13 14:43:13.020: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 13 14:43:13.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:13.300: INFO: stderr: "" Jan 13 14:43:13.300: INFO: stdout: "service/frontend created\n" Jan 13 14:43:13.300: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 13 14:43:13.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:13.563: INFO: stderr: "" Jan 13 14:43:13.563: INFO: stdout: "deployment.apps/frontend created\n" Jan 13 14:43:13.563: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 14:43:13.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:13.796: INFO: stderr: "" Jan 13 14:43:13.796: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 13 14:43:13.796: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 14:43:13.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 create -f -' Jan 13 14:43:14.051: INFO: stderr: "" Jan 13 14:43:14.051: INFO: stdout: "deployment.apps/agnhost-replica created\n" �[1mSTEP�[0m: validating guestbook app Jan 13 14:43:14.051: INFO: Waiting for all frontend pods to be Running. Jan 13 14:43:19.101: INFO: Waiting for frontend to serve content. Jan 13 14:43:19.110: INFO: Trying to add a new entry to the guestbook. Jan 13 14:46:52.239: INFO: Failed to get response from guestbook. err: an error on the server ("unknown") has prevented the request from succeeding (get services frontend), response: k8s� � �v1��Status��� � �������Failure�herror trying to reach service: read tcp 172.18.0.9:60580->192.168.2.9:80: read: connection reset by peer"�0����"� Jan 13 14:46:57.239: FAIL: Cannot added new entry in 180 seconds. 
Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 +0x159 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022ab200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0022ab200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0022ab200, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.370: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.370: INFO: stdout: "service \"agnhost-replica\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.502: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.502: INFO: stdout: "service \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.622: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.622: INFO: stdout: "service \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.719: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.719: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.828: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.828: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:46:57.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8592 delete --grace-period=0 --force -f -' Jan 13 14:46:57.991: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:46:57.991: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:46:57.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8592" for this suite. �[91m�[1m• Failure [225.538 seconds]�[0m [sig-cli] Kubectl client �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23�[0m Guestbook application �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342�[0m �[91m�[1mshould create and stop a working application [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 14:46:57.240: Cannot added new entry in 180 seconds.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:42:32.706: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a test externalName service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2663.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2663.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2663.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:46:15.379: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-2663.svc.cluster.local from pod dns-2663/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c: an error on the server ("unknown") has prevented the request from succeeding (get pods dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c) Jan 13 14:47:40.766: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local from pod dns-2663/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2663/pods/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c/proxy/results/jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003214120, 0xc003daa628, 0xc003214120, 0xc003daa628) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 0x2, 0x2, 0x4dccbe5, 0x7, 0xc002e6f800, 0x56112e0, 0xc002f1e580, 0x1, 0x4decde7, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc002e6f800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 +0x365 k8s.io/kubernetes/test/e2e/network.glob..func2.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000c36180, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 E0113 14:47:40.767241 14 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 13 14:47:40.766: Unable to read jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local from pod dns-2663/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-2663/pods/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c/proxy/results/jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003214120, 0xc003daa628, 0xc003214120, 0xc003daa628)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 0x2, 0x2, 0x4dccbe5, 0x7, 0xc002e6f800, 0x56112e0, 0xc002f1e580, 0x1, 0x4decde7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc002e6f800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 
+0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000c36180, 0x4fc9940)\n\t/usr/local/go/src/testing/testing.go:1123 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1168 +0x2b3"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 119 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x499f1e0, 0xc002ed2180) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x499f1e0, 0xc002ed2180) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc002dd2180, 0x16b, 0x77a462c, 0x7d, 0xd3, 0xc001c8a800, 0x771) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc002dd2180, 0x16b, 0xc003daa0d0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc002dd2180, 0x16b, 0xc003daa1b8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0x4e68bfb, 0x24, 0xc003daa418, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:481 +0xa6d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc003214120, 0xc003daa628, 0xc003214120, 0xc003daa628) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 
0x2, 0x2, 0x4dccbe5, 0x7, 0xc002e6f800, 0x56112e0, 0xc002f1e580, 0x1, 0x4decde7, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc002e6f800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 +0x365 k8s.io/kubernetes/test/e2e/network.glob..func2.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000f4bda0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000f4bda0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000f94ac0, 0x54fc2e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001752c30, 0x0, 0x54fc2e0, 0xc0001d28c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001752c30, 0x54fc2e0, 0xc0001d28c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001ff4000, 0xc001752c30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001ff4000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001ff4000, 0xc001ff0030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000180280, 0x7fc7cc1785f8, 0xc000c36180, 0x4e003e0, 0x14, 0xc002f70720, 0x3, 0x3, 0x55b68a0, 0xc0001d28c0, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x5500f20, 0xc000c36180, 0x4e003e0, 0x14, 0xc002e95540, 0x3, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x5500f20, 0xc000c36180, 0x4e003e0, 0x14, 0xc002f32880, 0x2, 0x2, 0x25) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000c36180, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:47:40.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-2663" for this suite. �[91m�[1m• Failure [308.092 seconds]�[0m [sig-network] DNS �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23�[0m �[91m�[1mshould provide DNS for ExternalName services [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 14:47:40.766: Unable to read jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local from pod dns-2663/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-2663/pods/dns-test-55a0a5a3-8c9d-45e3-a00e-a9de725e235c/proxy/results/jessie_udp@dns-test-service-3.dns-2663.svc.cluster.local": context deadline exceeded�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 �[90m------------------------------�[0m [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:50.822: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating pod busybox-3e6dc368-3081-4269-9914-e6c6b033e886 in namespace container-probe-3884 Jan 13 14:45:52.873: INFO: Started pod busybox-3e6dc368-3081-4269-9914-e6c6b033e886 in namespace container-probe-3884 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 13 14:45:52.876: INFO: Initial restart 
count of pod busybox-3e6dc368-3081-4269-9914-e6c6b033e886 is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:49:53.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-3884" for this suite. �[32m• [SLOW TEST:242.650 seconds]�[0m [k8s.io] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624�[0m should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":225,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:49:53.482: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 14:49:54.045: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 14:49:56.061: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809218194, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809218194, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809218194, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809218194, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 14:49:59.091: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering the 
mutating configmap webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:49:59.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-9706" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9706-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":18,"skipped":227,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:49:59.259: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating projection with secret that has name projected-secret-test-map-9f920cf5-82fa-4cbe-8c16-197b4b0cf8a1 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 14:49:59.346: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745" in namespace "projected-7906" to be "Succeeded or Failed" Jan 13 14:49:59.355: INFO: Pod "pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745": Phase="Pending", Reason="", readiness=false. Elapsed: 8.92663ms Jan 13 14:50:01.362: INFO: Pod "pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0153803s �[1mSTEP�[0m: Saw pod success Jan 13 14:50:01.362: INFO: Pod "pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745" satisfied condition "Succeeded or Failed" Jan 13 14:50:01.368: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745 container projected-secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:50:01.429: INFO: Waiting for pod pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745 to disappear Jan 13 14:50:01.437: INFO: Pod pod-projected-secrets-52607c41-7747-43e8-88a7-22cd0e2a4745 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:01.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7906" for this suite. 
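For reference, the projected-secret mapping that spec exercises looks roughly like the following client-go sketch; the secret name, key, paths and image are illustrative, not values taken from this run:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts a Secret through a projected volume and remaps
// one key to a different relative path, which is the behaviour the
// "consumable from pods in volume with mappings" spec verifies.
func projectedSecretPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
								// Map the key "data-1" to the relative path "new-path-data-1".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
}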
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":227,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:01.469: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 13 14:50:01.543: INFO: Waiting up to 5m0s for pod "downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6" in namespace "downward-api-2149" to be "Succeeded or Failed" Jan 13 14:50:01.547: INFO: Pod "downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.641723ms Jan 13 14:50:03.552: INFO: Pod "downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009064204s �[1mSTEP�[0m: Saw pod success Jan 13 14:50:03.553: INFO: Pod "downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6" satisfied condition "Succeeded or Failed" Jan 13 14:50:03.557: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:50:03.594: INFO: Waiting for pod downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6 to disappear Jan 13 14:50:03.599: INFO: Pod downward-api-cda62790-02b7-49c3-b2d3-44aa652647d6 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:03.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-2149" for this suite. 
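A minimal sketch of the downward-API environment variables that spec relies on: because the container declares no resource limits, the kubelet resolves limits.cpu and limits.memory from the node's allocatable resources. Names and image are illustrative, not values from this run:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod exposes the container's effective CPU/memory limits as
// environment variables via resourceFieldRef; with no limits set they fall
// back to node allocatable, which is what the spec above asserts.
func downwardAPIEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{
						Name: "CPU_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
						},
					},
					{
						Name: "MEMORY_LIMIT",
						ValueFrom: &corev1.EnvVarSource{
							ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
						},
					},
				},
			}},
		},
	}
}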
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":228,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:03.620: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating secret with name secret-test-119d09a6-f4d2-4632-bd39-f5bf273c7396 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 14:50:03.705: INFO: Waiting up to 5m0s for pod "pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117" in namespace "secrets-3428" to be "Succeeded or Failed" Jan 13 14:50:03.711: INFO: Pod "pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087034ms Jan 13 14:50:05.718: INFO: Pod "pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013044476s �[1mSTEP�[0m: Saw pod success Jan 13 14:50:05.718: INFO: Pod "pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117" satisfied condition "Succeeded or Failed" Jan 13 14:50:05.722: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:50:05.748: INFO: Waiting for pod pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117 to disappear Jan 13 14:50:05.754: INFO: Pod pod-secrets-f1d73cbe-96af-44e4-8be2-f7277f14d117 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:05.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-3428" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":228,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:05.910: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a replication controller Jan 13 14:50:05.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 create -f -' Jan 13 14:50:06.513: INFO: stderr: "" Jan 13 14:50:06.513: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Jan 13 14:50:06.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 14:50:06.761: INFO: stderr: "" Jan 13 14:50:06.761: INFO: stdout: "update-demo-nautilus-dhwpj update-demo-nautilus-g6n9g " Jan 13 14:50:06.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods update-demo-nautilus-dhwpj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 14:50:06.961: INFO: stderr: "" Jan 13 14:50:06.961: INFO: stdout: "" Jan 13 14:50:06.961: INFO: update-demo-nautilus-dhwpj is created but not running Jan 13 14:50:11.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 14:50:12.139: INFO: stderr: "" Jan 13 14:50:12.139: INFO: stdout: "update-demo-nautilus-dhwpj update-demo-nautilus-g6n9g " Jan 13 14:50:12.139: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods update-demo-nautilus-dhwpj -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 14:50:12.312: INFO: stderr: "" Jan 13 14:50:12.312: INFO: stdout: "true" Jan 13 14:50:12.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods update-demo-nautilus-dhwpj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 14:50:12.484: INFO: stderr: "" Jan 13 14:50:12.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 14:50:12.484: INFO: validating pod update-demo-nautilus-dhwpj Jan 13 14:50:12.493: INFO: got data: { "image": "nautilus.jpg" } Jan 13 14:50:12.493: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 14:50:12.493: INFO: update-demo-nautilus-dhwpj is verified up and running Jan 13 14:50:12.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods update-demo-nautilus-g6n9g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 14:50:12.676: INFO: stderr: "" Jan 13 14:50:12.677: INFO: stdout: "true" Jan 13 14:50:12.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods update-demo-nautilus-g6n9g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 14:50:12.844: INFO: stderr: "" Jan 13 14:50:12.844: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 14:50:12.844: INFO: validating pod update-demo-nautilus-g6n9g Jan 13 14:50:12.854: INFO: got data: { "image": "nautilus.jpg" } Jan 13 14:50:12.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 14:50:12.854: INFO: update-demo-nautilus-g6n9g is verified up and running �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:50:12.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 delete --grace-period=0 --force -f -' Jan 13 14:50:13.065: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:50:13.065: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 13 14:50:13.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get rc,svc -l name=update-demo --no-headers' Jan 13 14:50:13.371: INFO: stderr: "No resources found in kubectl-2008 namespace.\n" Jan 13 14:50:13.371: INFO: stdout: "" Jan 13 14:50:13.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2008 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 14:50:13.626: INFO: stderr: "" Jan 13 14:50:13.626: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:13.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-2008" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":22,"skipped":279,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:13.663: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:50:13.733: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289" in namespace "downward-api-1050" to be "Succeeded or Failed" Jan 13 14:50:13.738: INFO: Pod "downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289": Phase="Pending", Reason="", readiness=false. Elapsed: 5.331702ms Jan 13 14:50:15.747: INFO: Pod "downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013997509s �[1mSTEP�[0m: Saw pod success Jan 13 14:50:15.747: INFO: Pod "downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289" satisfied condition "Succeeded or Failed" Jan 13 14:50:15.753: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy pod downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:50:15.796: INFO: Waiting for pod downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289 to disappear Jan 13 14:50:15.800: INFO: Pod downwardapi-volume-f0b54a2c-22a0-4073-91a0-72f2c89ef289 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:15.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-1050" for this suite. 
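Roughly what a downward-API volume with DefaultMode looks like in client-go terms; the 0400 mode, file path and image are illustrative values, not taken from this run:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIVolumePod projects pod metadata into a volume and pins the file
// mode with DefaultMode, the field the "should set DefaultMode on files"
// spec above exercises.
func downwardAPIVolumePod() *corev1.Pod {
	mode := int32(0400)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						DefaultMode: &mode,
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
}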
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":280,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:16.100: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: starting the proxy server Jan 13 14:50:16.149: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6305 proxy -p 0 --disable-filter' �[1mSTEP�[0m: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:16.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-6305" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":24,"skipped":385,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:16.349: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the pod with lifecycle hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 13 14:50:22.449: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 14:50:22.455: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 14:50:24.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 14:50:24.461: INFO: Pod pod-with-prestop-http-hook still exists Jan 13 14:50:26.455: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 13 14:50:26.462: INFO: Pod pod-with-prestop-http-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:26.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-6047" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":391,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:26.544: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Create set of pods Jan 13 14:50:26.624: INFO: created test-pod-1 Jan 13 14:50:26.631: INFO: created test-pod-2 Jan 13 14:50:26.641: INFO: created test-pod-3 �[1mSTEP�[0m: waiting for all 3 pods to be located �[1mSTEP�[0m: waiting for all pods to be deleted [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:50:26.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-5851" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should delete a collection of pods [Conformance]","total":-1,"completed":26,"skipped":406,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:45:33.514: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for the rc to be deleted �[1mSTEP�[0m: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[1mSTEP�[0m: Gathering metrics W0113 14:46:13.577128 17 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 14:51:13.581: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. 
Jan 13 14:51:13.586: INFO: Deleting pod "simpletest.rc-4ccjq" in namespace "gc-5478" Jan 13 14:51:13.597: INFO: Deleting pod "simpletest.rc-6kqfp" in namespace "gc-5478" Jan 13 14:51:13.619: INFO: Deleting pod "simpletest.rc-9dkvm" in namespace "gc-5478" Jan 13 14:51:13.637: INFO: Deleting pod "simpletest.rc-dhnq7" in namespace "gc-5478" Jan 13 14:51:13.668: INFO: Deleting pod "simpletest.rc-gfkwl" in namespace "gc-5478" Jan 13 14:51:13.698: INFO: Deleting pod "simpletest.rc-gn24b" in namespace "gc-5478" Jan 13 14:51:13.728: INFO: Deleting pod "simpletest.rc-n6q5m" in namespace "gc-5478" Jan 13 14:51:13.769: INFO: Deleting pod "simpletest.rc-xkmql" in namespace "gc-5478" Jan 13 14:51:13.799: INFO: Deleting pod "simpletest.rc-xw9gh" in namespace "gc-5478" Jan 13 14:51:13.849: INFO: Deleting pod "simpletest.rc-zjprb" in namespace "gc-5478" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:13.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-5478" for this suite. �[32m• [SLOW TEST:340.476 seconds]�[0m [sig-api-machinery] Garbage collector �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23�[0m should orphan pods created by rc if delete options say so [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":13,"skipped":326,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:14.631: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:51:14.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb" in namespace "downward-api-4277" to be "Succeeded or Failed" Jan 13 14:51:14.742: INFO: Pod "downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb": Phase="Pending", Reason="", readiness=false. Elapsed: 9.733902ms Jan 13 14:51:16.750: INFO: Pod "downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb": Phase="Running", Reason="", readiness=true. Elapsed: 2.018248785s Jan 13 14:51:18.755: INFO: Pod "downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022968512s �[1mSTEP�[0m: Saw pod success Jan 13 14:51:18.755: INFO: Pod "downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb" satisfied condition "Succeeded or Failed" Jan 13 14:51:18.760: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:51:18.784: INFO: Waiting for pod downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb to disappear Jan 13 14:51:18.788: INFO: Pod downwardapi-volume-a269cb3d-283e-4f72-a864-9bd6c71e17cb no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:18.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4277" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":464,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:18.833: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Create set of events �[1mSTEP�[0m: get a list of Events with a label in the current namespace �[1mSTEP�[0m: delete a list of events Jan 13 14:51:18.904: INFO: requesting DeleteCollection of events �[1mSTEP�[0m: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:18.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-1004" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":15,"skipped":478,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:18.958: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 13 14:51:19.028: INFO: Waiting up to 5m0s for pod "downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f" in namespace "downward-api-907" to be "Succeeded or Failed" Jan 13 14:51:19.033: INFO: Pod "downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.769744ms Jan 13 14:51:21.039: INFO: Pod "downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010344929s �[1mSTEP�[0m: Saw pod success Jan 13 14:51:21.039: INFO: Pod "downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f" satisfied condition "Succeeded or Failed" Jan 13 14:51:21.042: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:51:21.067: INFO: Waiting for pod downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f to disappear Jan 13 14:51:21.071: INFO: Pod downward-api-8ce6b743-2b9b-42cf-9702-7552e716338f no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:21.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-907" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":485,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:21.114: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [k8s.io] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:21.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-6883" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":495,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:21.228: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1314 �[1mSTEP�[0m: creating the pod Jan 13 14:51:21.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 create -f -' Jan 13 14:51:21.850: INFO: stderr: "" Jan 13 14:51:21.850: INFO: stdout: "pod/pause created\n" Jan 13 14:51:21.850: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jan 13 14:51:21.850: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1034" to be "running and ready" Jan 13 14:51:21.855: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.421393ms Jan 13 14:51:23.859: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.008787225s Jan 13 14:51:23.859: INFO: Pod "pause" satisfied condition "running and ready" Jan 13 14:51:23.859: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: adding the label testing-label with value testing-label-value to a pod Jan 13 14:51:23.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 label pods pause testing-label=testing-label-value' Jan 13 14:51:24.082: INFO: stderr: "" Jan 13 14:51:24.082: INFO: stdout: "pod/pause labeled\n" �[1mSTEP�[0m: verifying the pod has the label testing-label with the value testing-label-value Jan 13 14:51:24.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 get pod pause -L testing-label' Jan 13 14:51:24.281: INFO: stderr: "" Jan 13 14:51:24.281: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s testing-label-value\n" �[1mSTEP�[0m: removing the label testing-label of a pod Jan 13 14:51:24.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 label pods pause testing-label-' Jan 13 14:51:24.477: INFO: stderr: "" Jan 13 14:51:24.477: INFO: stdout: "pod/pause labeled\n" �[1mSTEP�[0m: verifying the pod doesn't have the label testing-label Jan 13 14:51:24.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 get pod pause -L testing-label' Jan 13 14:51:24.645: INFO: stderr: "" Jan 13 14:51:24.645: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 3s \n" [AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1320 �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:51:24.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 delete --grace-period=0 --force -f -' Jan 13 14:51:24.825: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:51:24.826: INFO: stdout: "pod \"pause\" force deleted\n" Jan 13 14:51:24.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 get rc,svc -l name=pause --no-headers' Jan 13 14:51:25.032: INFO: stderr: "No resources found in kubectl-1034 namespace.\n" Jan 13 14:51:25.032: INFO: stdout: "" Jan 13 14:51:25.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1034 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 14:51:25.236: INFO: stderr: "" Jan 13 14:51:25.236: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:25.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-1034" for this suite. 
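The two kubectl label invocations above are equivalent to strategic-merge patches against the pod; a hedged client-go sketch (the pod name "pause" and label come from the log, everything else is illustrative):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelPod adds testing-label=testing-label-value to the "pause" pod and then
// removes it again, mirroring the kubectl label calls above.
func labelPod(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(namespace).Patch(ctx, "pause", types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		return err
	}
	// Setting the value to null removes the key, which is what
	// `kubectl label pods pause testing-label-` does.
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := cs.CoreV1().Pods(namespace).Patch(ctx, "pause", types.StrategicMergePatchType, remove, metav1.PatchOptions{})
	return err
}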
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":18,"skipped":500,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:25.292: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename ingressclass �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:148 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/networking.k8s.io �[1mSTEP�[0m: getting /apis/networking.k8s.iov1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Jan 13 14:51:25.373: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Jan 13 14:51:25.387: INFO: waiting for watch events with expected annotations Jan 13 14:51:25.387: INFO: saw patched and updated annotations �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:25.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "ingressclass-2030" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":19,"skipped":517,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:25.450: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Jan 13 14:51:25.504: INFO: Waiting up to 5m0s for pod "pod-323bedf7-6511-4975-9d4d-cbedb3d90b46" in namespace "emptydir-2910" to be "Succeeded or Failed" Jan 13 14:51:25.509: INFO: Pod "pod-323bedf7-6511-4975-9d4d-cbedb3d90b46": Phase="Pending", Reason="", readiness=false. Elapsed: 4.928863ms Jan 13 14:51:27.514: INFO: Pod "pod-323bedf7-6511-4975-9d4d-cbedb3d90b46": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009888655s �[1mSTEP�[0m: Saw pod success Jan 13 14:51:27.515: INFO: Pod "pod-323bedf7-6511-4975-9d4d-cbedb3d90b46" satisfied condition "Succeeded or Failed" Jan 13 14:51:27.518: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-323bedf7-6511-4975-9d4d-cbedb3d90b46 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:51:27.539: INFO: Waiting for pod pod-323bedf7-6511-4975-9d4d-cbedb3d90b46 to disappear Jan 13 14:51:27.543: INFO: Pod pod-323bedf7-6511-4975-9d4d-cbedb3d90b46 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:27.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-2910" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":518,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:27.608: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:51:27.657: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace �[1mSTEP�[0m: Creating rc "condition-test" that asks for more than the allowed pod quota �[1mSTEP�[0m: Checking rc "condition-test" has the desired failure condition set �[1mSTEP�[0m: Scaling down rc "condition-test" to satisfy pod quota Jan 13 14:51:29.708: INFO: Updating replication controller "condition-test" �[1mSTEP�[0m: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:30.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-7515" for this suite. 
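The "condition-test" quota created above allows only two pods in the namespace; a sketch of such a quota in client-go terms. The hard limit of 2 mirrors the log message; the reading that the over-sized RC then surfaces a failure condition until scaled down is my interpretation of the spec, not output from this run:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// conditionTestQuota caps the namespace at two pods, the same constraint the
// "condition-test" quota above imposes on the replication controller.
func conditionTestQuota(namespace string) *corev1.ResourceQuota {
	return &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test", Namespace: namespace},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
}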
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":21,"skipped":540,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:30.743: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:51:30.792: INFO: Creating deployment "test-recreate-deployment" Jan 13 14:51:30.797: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 13 14:51:30.807: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 13 14:51:32.817: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 13 14:51:32.823: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 13 14:51:32.839: INFO: Updating deployment test-recreate-deployment Jan 13 14:51:32.839: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 14:51:32.988: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-2702 f26203e7-98e7-42ee-86b6-0ccc67282d35 7054 2 2023-01-13 14:51:30 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update apps/v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 
UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0036bfb38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-13 14:51:32 +0000 UTC,LastTransitionTime:2023-01-13 14:51:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-f79dd4667" is progressing.,LastUpdateTime:2023-01-13 14:51:32 +0000 UTC,LastTransitionTime:2023-01-13 14:51:30 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 13 14:51:32.994: INFO: New ReplicaSet "test-recreate-deployment-f79dd4667" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-f79dd4667 deployment-2702 d0e2049c-b6d6-470b-944a-ae17e6e08676 7052 1 2023-01-13 14:51:32 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment f26203e7-98e7-42ee-86b6-0ccc67282d35 0xc0030cede0 0xc0030cede1}] [] [{kube-controller-manager Update apps/v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f26203e7-98e7-42ee-86b6-0ccc67282d35\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: f79dd4667,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [] [] []} {[] [] 
[{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0030cee58 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 14:51:32.994: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 13 14:51:32.995: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-786dd7c454 deployment-2702 4f44b485-9237-48ac-babe-16988f1ecc4f 7044 2 2023-01-13 14:51:30 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment f26203e7-98e7-42ee-86b6-0ccc67282d35 0xc0030cecf7 0xc0030cecf8}] [] [{kube-controller-manager Update apps/v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f26203e7-98e7-42ee-86b6-0ccc67282d35\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 786dd7c454,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:786dd7c454] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0030ced88 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> 
map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 14:51:33.003: INFO: Pod "test-recreate-deployment-f79dd4667-nhznh" is not available: &Pod{ObjectMeta:{test-recreate-deployment-f79dd4667-nhznh test-recreate-deployment-f79dd4667- deployment-2702 f556e25a-f189-4b8f-930b-d275ce8fefe8 7055 0 2023-01-13 14:51:32 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:f79dd4667] map[] [{apps/v1 ReplicaSet test-recreate-deployment-f79dd4667 d0e2049c-b6d6-470b-944a-ae17e6e08676 0xc0036bfe70 0xc0036bfe71}] [] [{kube-controller-manager Update v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d0e2049c-b6d6-470b-944a-ae17e6e08676\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-13 14:51:32 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b8dcd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b8dcd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b8dcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirs
t,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:51:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:51:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:51:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 14:51:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-13 14:51:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-2702" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":22,"skipped":544,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:33.037: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Given a Pod with a 'name' label pod-adoption-release is created �[1mSTEP�[0m: When a replicaset with a matching selector is created �[1mSTEP�[0m: Then the orphan pod is adopted �[1mSTEP�[0m: When the matched label of one of its pods change Jan 13 14:51:36.140: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 �[1mSTEP�[0m: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:51:37.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-7418" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":23,"skipped":550,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:51:37.191: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status captures configMap creation �[1mSTEP�[0m: Deleting a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:52:05.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-2654" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":-1,"completed":24,"skipped":557,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:52:05.388: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: updating the pod Jan 13 14:52:07.986: INFO: Successfully updated pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb" Jan 13 14:52:07.986: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb" in namespace "pods-8517" to be "terminated due to deadline exceeded" Jan 13 14:52:07.992: INFO: Pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb": Phase="Running", Reason="", readiness=true. Elapsed: 5.508542ms Jan 13 14:52:09.998: INFO: Pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb": Phase="Running", Reason="", readiness=true. Elapsed: 2.012177498s Jan 13 14:52:12.004: INFO: Pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.017838445s Jan 13 14:52:12.004: INFO: Pod "pod-update-activedeadlineseconds-47856c06-9052-4231-ae3a-7e2d1d7a49bb" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:52:12.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-8517" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":571,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:52:12.069: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test substitution in container's args Jan 13 14:52:12.131: INFO: Waiting up to 5m0s for pod "var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3" in namespace "var-expansion-1053" to be "Succeeded or Failed" Jan 13 14:52:12.136: INFO: Pod "var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.9838ms Jan 13 14:52:14.142: INFO: Pod "var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010095292s �[1mSTEP�[0m: Saw pod success Jan 13 14:52:14.142: INFO: Pod "var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3" satisfied condition "Succeeded or Failed" Jan 13 14:52:14.147: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:52:14.175: INFO: Waiting for pod var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3 to disappear Jan 13 14:52:14.180: INFO: Pod var-expansion-6e139cd1-d565-4fac-9b30-bbd3d03869e3 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:52:14.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-1053" for this suite. 
•
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":588,"failed":0}
SSS
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":6,"skipped":86,"failed":1,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:47:40.800: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8655.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8655.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8655.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 13 14:51:16.436: INFO: Unable to read wheezy_udp@dns-test-service-3.dns-8655.svc.cluster.local from pod dns-8655/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7: an error on the server ("unknown") has prevented the request from succeeding (get pods dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7)
Jan 13 14:52:42.862: FAIL: Unable to read jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local from pod dns-8655/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8655/pods/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7/proxy/results/jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local": context deadline exceeded
Full Stack Trace
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0029ed200, 0xc003daa628, 0xc0029ed200, 0xc003daa628)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d
k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 0x2, 0x2, 0x4dccbe5, 0x7, 0xc00282a800, 0x56112e0, 0xc0033031e0, 0x1, 0x4decde7, ...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc00282a800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 +0x365 k8s.io/kubernetes/test/e2e/network.glob..func2.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000c36180, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 E0113 14:52:42.864248 14 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 13 14:52:42.862: Unable to read jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local from pod dns-8655/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-8655/pods/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7/proxy/results/jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:211, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0029ed200, 0xc003daa628, 0xc0029ed200, 0xc003daa628)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d\nk8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 0x2, 0x2, 0x4dccbe5, 0x7, 0xc00282a800, 0x56112e0, 0xc0033031e0, 0x1, 0x4decde7, ...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158\nk8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc00282a800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 +0x365\nk8s.io/kubernetes/test/e2e/network.glob..func2.9()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa\nk8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c\nk8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b\ntesting.tRunner(0xc000c36180, 0x4fc9940)\n\t/usr/local/go/src/testing/testing.go:1123 +0xef\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1168 +0x2b3"} ( Your test failed. 
Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. ) goroutine 119 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x499f1e0, 0xc00322a180) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x89 panic(0x499f1e0, 0xc00322a180) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1(0xc0024f7200, 0x16b, 0x77a462c, 0x7d, 0xd3, 0xc00192f000, 0x771) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0xa5 panic(0x41905e0, 0x5431f10) /usr/local/go/src/runtime/panic.go:969 +0x1b9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail(0xc0024f7200, 0x16b, 0xc003daa0d0, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:267 +0xc8 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail(0xc0024f7200, 0x16b, 0xc003daa1b8, 0x1, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1b5 k8s.io/kubernetes/test/e2e/framework.Failf(0x4e68bfb, 0x24, 0xc003daa418, 0x4, 0x4) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x219 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1(0x0, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:481 +0xa6d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtection(0xc003daa628, 0xcb0200, 0x0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211 +0x69 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0029ed200, 0xc003daa628, 0xc0029ed200, 0xc003daa628) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:445 +0x2f k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x12a05f200, 0x8bb2c97000, 0xc003daa628, 0x4a, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:441 +0x4d k8s.io/kubernetes/test/e2e/network.assertFilesContain(0xc003daa918, 0x2, 0x2, 0x4dccbe5, 0x7, 0xc00282a800, 0x56112e0, 0xc0033031e0, 0x1, 0x4decde7, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x158 k8s.io/kubernetes/test/e2e/network.validateTargetedProbeOutput(0xc001002420, 0xc00282a800, 0xc003daa918, 0x2, 0x2, 0x4decde7, 0x10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:549 +0x365 k8s.io/kubernetes/test/e2e/network.glob..func2.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:353 +0x6fa k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000f4bda0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xa3 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000f4bda0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x15c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000f94ac0, 0x54fc2e0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x87 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001752c30, 0x0, 0x54fc2e0, 0xc0001d28c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x72f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001752c30, 0x54fc2e0, 0xc0001d28c0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc001ff4000, 0xc001752c30, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x111 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc001ff4000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x127 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc001ff4000, 0xc001ff0030) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc000180280, 0x7fc7cc1785f8, 0xc000c36180, 0x4e003e0, 0x14, 0xc002f70720, 0x3, 0x3, 0x55b68a0, 0xc0001d28c0, ...) 
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x426
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x5500f20, 0xc000c36180, 0x4e003e0, 0x14, 0xc002e95540, 0x3, 0x4, 0x4)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:226 +0x238
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x5500f20, 0xc000c36180, 0x4e003e0, 0x14, 0xc002f32880, 0x2, 0x2, 0x25)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:214 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000c36180, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:52:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8655" for this suite.
• Failure [302.116 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
  Jan 13 14:52:42.862: Unable to read jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local from pod dns-8655/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-8655/pods/dns-test-e515f600-f058-41e9-ab73-ac7b137de7a7/proxy/results/jessie_udp@dns-test-service-3.dns-8655.svc.cluster.local": context deadline exceeded
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:211
------------------------------
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:52:14.212: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
[AfterEach] [k8s.io] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:53:14.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9578" for this suite.
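The Ginkgo note in the DNS failure above points at the usual fix for assertions made off the spec goroutine: defer GinkgoRecover() at the top of that goroutine. A minimal sketch of the pattern, using the v1 ginkgo/gomega APIs this test binary vendors (the suite and spec names below are illustrative, not taken from the failing test):

```go
// file: example_test.go (hypothetical)
package example

import (
	"testing"

	"github.com/onsi/ginkgo"
	"github.com/onsi/gomega"
)

// Wire gomega failures into ginkgo and run the suite.
func TestExampleSuite(t *testing.T) {
	gomega.RegisterFailHandler(ginkgo.Fail)
	ginkgo.RunSpecs(t, "example suite")
}

var _ = ginkgo.Describe("assertions made in goroutines", func() {
	ginkgo.It("recovers failures raised off the spec goroutine", func() {
		done := make(chan struct{})
		go func() {
			// Without this deferred call, a failing assertion in this goroutine
			// panics straight through the test binary, which is exactly the
			// "Ginkgo panics to prevent subsequent assertions" situation above.
			defer ginkgo.GinkgoRecover()
			defer close(done)
			gomega.Expect(1 + 1).To(gomega.Equal(2))
		}()
		<-done
	})
})
```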
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":591,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:14.391: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should adopt matching pods on creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Given a Pod with a 'name' label pod-adoption is created �[1mSTEP�[0m: When a replication controller with a matching selector is created �[1mSTEP�[0m: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:53:17.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-8370" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":28,"skipped":631,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:17.632: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:53:17.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-8936" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":29,"skipped":674,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:17.818: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:53:17.875: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:53:19.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-6661" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":30,"skipped":698,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":6,"skipped":86,"failed":2,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]} [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:52:42.921: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a test externalName service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:52:47.032: INFO: DNS probes using dns-test-fa164bb1-0127-49c6-a80c-a55b84e3ff2d succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: changing the externalName to bar.example.com �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: creating a second pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:52:51.109: INFO: File wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:52:51.116: INFO: File jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 13 14:52:51.116: INFO: Lookups using dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 failed for: [wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local] Jan 13 14:52:56.129: INFO: File wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:52:56.135: INFO: File jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:52:56.135: INFO: Lookups using dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 failed for: [wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local] Jan 13 14:53:01.123: INFO: File wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:53:01.128: INFO: File jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:53:01.128: INFO: Lookups using dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 failed for: [wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local] Jan 13 14:53:06.123: INFO: File wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:53:06.127: INFO: File jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:53:06.127: INFO: Lookups using dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 failed for: [wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local] Jan 13 14:53:11.122: INFO: File wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 13 14:53:11.128: INFO: File jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local from pod dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jan 13 14:53:11.128: INFO: Lookups using dns-7988/dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 failed for: [wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local] Jan 13 14:53:16.127: INFO: DNS probes using dns-test-1387cca2-aadb-4770-a07d-94e9d26612c9 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: changing the service to type=ClusterIP �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7988.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7988.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: creating a third pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:53:20.258: INFO: DNS probes using dns-test-baf755ec-a54c-4a8b-8b77-02b48b94f56e succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:53:20.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-7988" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":7,"skipped":86,"failed":2,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:19.192: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 14:53:19.241: INFO: Creating ReplicaSet my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18 Jan 13 14:53:19.251: INFO: Pod name my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18: Found 0 pods out of 1 Jan 13 14:53:24.259: INFO: Pod name my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18: Found 1 pods out of 1 Jan 13 14:53:24.259: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18" is running Jan 13 14:53:24.263: INFO: Pod "my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18-chc4z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 14:53:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
LastTransitionTime:2023-01-13 14:53:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 14:53:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 14:53:19 +0000 UTC Reason: Message:}]) Jan 13 14:53:24.263: INFO: Trying to dial the pod Jan 13 14:53:29.278: INFO: Controller my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18: Got expected result from replica 1 [my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18-chc4z]: "my-hostname-basic-cdc3c943-dc81-47cf-8fcc-5aa1af86ec18-chc4z", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:53:29.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-1540" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":31,"skipped":704,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":9,"skipped":244,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:46:58.007: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating all guestbook components Jan 13 14:46:58.066: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 13 14:46:58.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:58.342: INFO: stderr: "" Jan 13 14:46:58.342: INFO: stdout: "service/agnhost-replica created\n" Jan 13 14:46:58.342: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 13 14:46:58.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:58.603: INFO: stderr: "" Jan 13 14:46:58.603: INFO: stdout: "service/agnhost-primary created\n" Jan 13 14:46:58.603: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 13 14:46:58.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:58.835: INFO: stderr: "" Jan 13 14:46:58.835: INFO: stdout: "service/frontend created\n" Jan 13 14:46:58.835: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 13 14:46:58.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:59.063: INFO: stderr: "" Jan 13 14:46:59.063: INFO: stdout: "deployment.apps/frontend created\n" Jan 13 14:46:59.063: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 14:46:59.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:59.314: INFO: stderr: "" Jan 13 14:46:59.314: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 13 14:46:59.314: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.21 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 13 14:46:59.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 create -f -' Jan 13 14:46:59.607: INFO: stderr: "" Jan 13 14:46:59.607: INFO: stdout: "deployment.apps/agnhost-replica created\n" �[1mSTEP�[0m: validating guestbook app Jan 13 14:46:59.607: INFO: Waiting for all frontend pods to be Running. Jan 13 14:47:04.658: INFO: Waiting for frontend to serve content. Jan 13 14:50:37.519: INFO: Failed to get response from guestbook. err: an error on the server ("unknown") has prevented the request from succeeding (get services frontend), response: k8s� � �v1��Status��� � �������Failure�ierror trying to reach service: read tcp 172.18.0.9:56484->192.168.2.16:80: read: connection reset by peer"�0����"� Jan 13 14:50:42.534: INFO: Trying to add a new entry to the guestbook. Jan 13 14:54:16.655: INFO: Failed to get response from guestbook. err: an error on the server ("unknown") has prevented the request from succeeding (get services frontend), response: k8s� � �v1��Status��� � �������Failure�ierror trying to reach service: read tcp 172.18.0.9:58546->192.168.2.16:80: read: connection reset by peer"�0����"� Jan 13 14:54:21.655: FAIL: Cannot added new entry in 180 seconds. 
Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 +0x159 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022ab200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc0022ab200) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc0022ab200, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:21.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:21.759: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:21.759: INFO: stdout: "service \"agnhost-replica\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:21.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:21.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:21.898: INFO: stdout: "service \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:21.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:22.032: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:22.032: INFO: stdout: "service \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:22.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:22.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:22.125: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:22.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:22.241: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:22.241: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 13 14:54:22.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1317 delete --grace-period=0 --force -f -' Jan 13 14:54:22.412: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 13 14:54:22.412: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:54:22.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-1317" for this suite. �[91m�[1m• Failure [444.446 seconds]�[0m [sig-cli] Kubectl client �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23�[0m Guestbook application �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342�[0m �[91m�[1mshould create and stop a working application [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 14:54:21.655: Cannot added new entry in 180 seconds.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:50:26.794: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for all pods to be garbage collected �[1mSTEP�[0m: Gathering metrics W0113 14:50:36.911697 16 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled. Jan 13 14:55:36.916: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering. [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:55:36.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-2211" for this suite. 
�[32m• [SLOW TEST:310.130 seconds]�[0m [sig-api-machinery] Garbage collector �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23�[0m should delete pods created by rc when not orphaning [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":27,"skipped":415,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:55:36.928: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Jan 13 14:55:36.964: INFO: Waiting up to 5m0s for pod "pod-cc252ae4-89bc-474b-985b-844f74f64587" in namespace "emptydir-2891" to be "Succeeded or Failed" Jan 13 14:55:36.968: INFO: Pod "pod-cc252ae4-89bc-474b-985b-844f74f64587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208468ms Jan 13 14:55:38.971: INFO: Pod "pod-cc252ae4-89bc-474b-985b-844f74f64587": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007675777s �[1mSTEP�[0m: Saw pod success Jan 13 14:55:38.971: INFO: Pod "pod-cc252ae4-89bc-474b-985b-844f74f64587" satisfied condition "Succeeded or Failed" Jan 13 14:55:38.974: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-cc252ae4-89bc-474b-985b-844f74f64587 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:55:38.997: INFO: Waiting for pod pod-cc252ae4-89bc-474b-985b-844f74f64587 to disappear Jan 13 14:55:38.999: INFO: Pod pod-cc252ae4-89bc-474b-985b-844f74f64587 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:55:38.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-2891" for this suite. 
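The (root,0644,tmpfs) emptydir spec above boils down to mounting a memory-backed emptyDir volume and checking the ownership and mode of a file written into it from inside the container. A minimal sketch of that pod shape follows; the pod name, image, command and mount path are illustrative assumptions, not values taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical name, not from the log
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative image; the e2e suite uses its own test image
    command: ["sh", "-c", "echo hello > /mnt/volume/data && ls -l /mnt/volume/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # "Memory" backs the emptyDir with tmpfs, as in the (root,0644,tmpfs) case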
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":415,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:55:39.030: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating pod busybox-24108dcf-19a0-45e4-806f-eb90abbc641a in namespace container-probe-6195 Jan 13 14:55:41.070: INFO: Started pod busybox-24108dcf-19a0-45e4-806f-eb90abbc641a in namespace container-probe-6195 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 13 14:55:41.072: INFO: Initial restart count of pod busybox-24108dcf-19a0-45e4-806f-eb90abbc641a is 0 Jan 13 14:56:33.166: INFO: Restart count of pod container-probe-6195/busybox-24108dcf-19a0-45e4-806f-eb90abbc641a is now 1 (52.093669726s elapsed) �[1mSTEP�[0m: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:56:33.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-6195" for this suite. 
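The probing-container spec above waits for the container's restart count to increase once an exec probe running "cat /tmp/health" starts failing. A minimal sketch of a pod exercising the same mechanism, assuming a busybox image that removes the probed file after 30 seconds; names, image and timings are illustrative, not values from this run:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-demo           # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox                   # illustrative; the e2e test uses its own busybox-based pod
    command: ["sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # the exec probe the test name refers to
      initialDelaySeconds: 5
      periodSeconds: 5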
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":433,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:56:33.200: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: Orphaning one of the Job's Pods Jan 13 14:56:35.755: INFO: Successfully updated pod "adopt-release-swdt5" �[1mSTEP�[0m: Checking that the Job readopts the Pod Jan 13 14:56:35.755: INFO: Waiting up to 15m0s for pod "adopt-release-swdt5" in namespace "job-1853" to be "adopted" Jan 13 14:56:35.759: INFO: Pod "adopt-release-swdt5": Phase="Running", Reason="", readiness=true. Elapsed: 3.963453ms Jan 13 14:56:37.763: INFO: Pod "adopt-release-swdt5": Phase="Running", Reason="", readiness=true. Elapsed: 2.007598698s Jan 13 14:56:37.763: INFO: Pod "adopt-release-swdt5" satisfied condition "adopted" �[1mSTEP�[0m: Removing the labels from the Job's Pod Jan 13 14:56:38.270: INFO: Successfully updated pod "adopt-release-swdt5" �[1mSTEP�[0m: Checking that the Job releases the Pod Jan 13 14:56:38.270: INFO: Waiting up to 15m0s for pod "adopt-release-swdt5" in namespace "job-1853" to be "released" Jan 13 14:56:38.273: INFO: Pod "adopt-release-swdt5": Phase="Running", Reason="", readiness=true. Elapsed: 2.870075ms Jan 13 14:56:40.276: INFO: Pod "adopt-release-swdt5": Phase="Running", Reason="", readiness=true. Elapsed: 2.006498332s Jan 13 14:56:40.277: INFO: Pod "adopt-release-swdt5" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:56:40.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-1853" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":30,"skipped":443,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:56:40.303: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-projected-all-test-volume-f39a62de-f6dd-4bea-9542-559e88ad4fcf �[1mSTEP�[0m: Creating secret with name secret-projected-all-test-volume-28b00504-23dc-4239-ac4b-a9022d1f679a �[1mSTEP�[0m: Creating a pod to test Check all projections for projected volume plugin Jan 13 14:56:40.349: INFO: Waiting up to 5m0s for pod "projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49" in namespace "projected-9994" to be "Succeeded or Failed" Jan 13 14:56:40.353: INFO: Pod "projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.073481ms Jan 13 14:56:42.356: INFO: Pod "projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00659181s �[1mSTEP�[0m: Saw pod success Jan 13 14:56:42.356: INFO: Pod "projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49" satisfied condition "Succeeded or Failed" Jan 13 14:56:42.359: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy pod projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49 container projected-all-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:56:42.383: INFO: Waiting for pod projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49 to disappear Jan 13 14:56:42.386: INFO: Pod projected-volume-c13a28c1-e42d-4a25-bf12-eacad52efc49 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:56:42.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9994" for this suite. 
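The projected-volume spec above mounts a ConfigMap, a Secret and downward API fields through a single projected volume, which is what "all components that make up the projection API" refers to. A minimal sketch of that layout, with hypothetical resource names standing in for the randomly generated ones in the log:

apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo           # hypothetical name
spec:
  containers:
  - name: test
    image: busybox                   # illustrative image
    command: ["sh", "-c", "ls -R /projected && sleep 10"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: my-configmap         # hypothetical; the test generates its own names
      - secret:
          name: my-secret            # hypothetical
      - downwardAPI:
          items:
          - path: "podname"
            fieldRef:
              fieldPath: metadata.name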
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":453,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:56:42.397: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test service account token: Jan 13 14:56:42.430: INFO: Waiting up to 5m0s for pod "test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb" in namespace "svcaccounts-7887" to be "Succeeded or Failed" Jan 13 14:56:42.437: INFO: Pod "test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.77741ms Jan 13 14:56:44.441: INFO: Pod "test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010722503s �[1mSTEP�[0m: Saw pod success Jan 13 14:56:44.441: INFO: Pod "test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb" satisfied condition "Succeeded or Failed" Jan 13 14:56:44.444: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-f7pjhy pod test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:56:44.463: INFO: Waiting for pod test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb to disappear Jan 13 14:56:44.466: INFO: Pod test-pod-15d49a13-4e7d-4a86-acf7-0b994fb247eb no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:56:44.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-7887" for this suite. 
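The ServiceAccounts spec above mounts a projected service account token into the pod rather than relying on the legacy secret-based mount. A minimal sketch of the volume involved; the pod name, image, token path and expiry are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: projected-sa-token-demo      # hypothetical name
spec:
  serviceAccountName: default
  containers:
  - name: test
    image: busybox                   # illustrative image
    command: ["sh", "-c", "cat /var/run/secrets/tokens/sa-token && sleep 10"]
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          path: sa-token
          expirationSeconds: 3600    # illustrative value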
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":32,"skipped":454,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:20.472: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-6954 �[1mSTEP�[0m: creating service affinity-nodeport-transition in namespace services-6954 �[1mSTEP�[0m: creating replication controller affinity-nodeport-transition in namespace services-6954 I0113 14:53:20.609639 14 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-6954, replica count: 3 I0113 14:53:23.660530 14 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:53:23.675: INFO: Creating new exec pod Jan 13 14:53:26.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 13 14:53:27.334: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 13 14:53:27.334: INFO: stdout: "" Jan 13 14:53:27.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c nc -zv -t -w 2 10.141.65.131 80' Jan 13 14:53:27.645: INFO: stderr: "+ nc -zv -t -w 2 10.141.65.131 80\nConnection to 10.141.65.131 80 port [tcp/http] succeeded!\n" Jan 13 14:53:27.645: INFO: stdout: "" Jan 13 14:53:27.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 32258' Jan 13 14:53:27.977: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 32258\nConnection to 172.18.0.4 32258 port [tcp/32258] succeeded!\n" Jan 13 14:53:27.977: INFO: stdout: "" Jan 13 14:53:27.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 32258' Jan 13 14:53:28.347: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 32258\nConnection to 172.18.0.5 32258 port [tcp/32258] succeeded!\n" Jan 13 14:53:28.347: INFO: stdout: "" Jan 13 14:53:28.366: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32258/ ; done' Jan 13 14:54:18.733: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n" Jan 13 14:54:18.733: INFO: stdout: "\n" Jan 13 14:54:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32258/ ; done' Jan 13 14:55:38.911: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n" Jan 13 14:55:38.911: INFO: stdout: "\naffinity-nodeport-transition-jgm99\n" Jan 13 14:55:38.911: INFO: Received response from host: affinity-nodeport-transition-jgm99 Jan 13 14:55:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32258/ ; done' Jan 13 14:56:38.933: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n" Jan 13 14:56:38.934: INFO: stdout: "\naffinity-nodeport-transition-jgm99\naffinity-nodeport-transition-cq4m9\n" Jan 13 14:56:38.934: INFO: Received response from host: affinity-nodeport-transition-jgm99 Jan 13 14:56:38.934: INFO: Received response from host: affinity-nodeport-transition-cq4m9 Jan 13 14:56:38.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6954 exec execpod-affinityss4pt -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:32258/ ; done' Jan 13 14:57:29.127: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:32258/\n" Jan 13 14:57:29.128: INFO: stdout: "\naffinity-nodeport-transition-jgm99\naffinity-nodeport-transition-cq4m9\n" Jan 13 14:57:29.128: INFO: Received response from host: affinity-nodeport-transition-jgm99 Jan 13 14:57:29.128: INFO: Received response from host: affinity-nodeport-transition-cq4m9 Jan 13 14:57:29.128: INFO: [affinity-nodeport-transition-jgm99 affinity-nodeport-transition-jgm99 affinity-nodeport-transition-cq4m9 affinity-nodeport-transition-jgm99 affinity-nodeport-transition-cq4m9] Jan 13 14:57:29.128: FAIL: Connection timed out or not enough responses. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc0019642c0, 0xc003129800, 0xc002e40dc0, 0xa, 0x7e02, 0x0, 0xc003129800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000e914a0, 0x56112e0, 0xc0019642c0, 0xc000c82000, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399 k8s.io/kubernetes/test/e2e/network.glob..func24.30() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2485 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000c36180, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 Jan 13 14:57:29.129: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-transition in namespace services-6954, will wait for the garbage collector to delete the pods Jan 13 14:57:29.199: INFO: Deleting ReplicationController affinity-nodeport-transition took: 5.639538ms Jan 13 14:57:29.699: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.373854ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:57:42.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-6954" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[91m�[1m• Failure [262.277 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23�[0m �[91m�[1mshould be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 14:57:29.128: Connection timed out or not enough responses.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:56:44.506: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-fb492e1b-2cf9-41d5-88df-c537592a8131 �[1mSTEP�[0m: Creating the pod �[1mSTEP�[0m: Updating configmap configmap-test-upd-fb492e1b-2cf9-41d5-88df-c537592a8131 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:00.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-9171" for this suite. 
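The ConfigMap spec above ("updates should be reflected in volume") relies on the kubelet periodically syncing ConfigMap-backed volumes, so an update to the ConfigMap eventually shows up in the mounted file without restarting the pod. A minimal sketch of a pod wired up that way, with hypothetical names in place of the generated ones:

apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo        # hypothetical name
spec:
  containers:
  - name: test
    image: busybox                   # illustrative image
    command: ["sh", "-c", "while true; do cat /etc/config/data; sleep 5; done"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: my-config                # hypothetical; the test uses a generated name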
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":478,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:00.836: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 14:58:01.604: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 14:58:04.623: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API �[1mSTEP�[0m: create a namespace for the webhook �[1mSTEP�[0m: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:04.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-7979" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-7979-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":34,"skipped":479,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:04.816: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0666 on node default medium Jan 13 14:58:04.853: INFO: Waiting up to 5m0s for pod "pod-d1c36caf-573e-402c-a348-3f708353b12e" in namespace "emptydir-3485" to be "Succeeded or Failed" Jan 13 14:58:04.858: INFO: Pod "pod-d1c36caf-573e-402c-a348-3f708353b12e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608243ms Jan 13 14:58:06.863: INFO: Pod "pod-d1c36caf-573e-402c-a348-3f708353b12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009110733s �[1mSTEP�[0m: Saw pod success Jan 13 14:58:06.863: INFO: Pod "pod-d1c36caf-573e-402c-a348-3f708353b12e" satisfied condition "Succeeded or Failed" Jan 13 14:58:06.865: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-d1c36caf-573e-402c-a348-3f708353b12e container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:58:06.887: INFO: Waiting for pod pod-d1c36caf-573e-402c-a348-3f708353b12e to disappear Jan 13 14:58:06.890: INFO: Pod pod-d1c36caf-573e-402c-a348-3f708353b12e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:06.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3485" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":511,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:06.903: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Given a ReplicationController is created �[1mSTEP�[0m: When the matched label of one of its pods change Jan 13 14:58:06.937: INFO: Pod name pod-release: Found 0 pods out of 1 Jan 13 14:58:11.941: INFO: Pod name pod-release: Found 1 pods out of 1 �[1mSTEP�[0m: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-1265" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":36,"skipped":514,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:12.973: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the pod with lifecycle hook �[1mSTEP�[0m: check poststart hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 13 14:58:17.053: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 14:58:17.056: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 14:58:19.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 14:58:19.061: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 14:58:21.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 14:58:21.060: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 14:58:23.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 14:58:23.061: INFO: Pod pod-with-poststart-exec-hook still exists Jan 13 14:58:25.056: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 13 14:58:25.060: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:25.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-5993" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":520,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:25.103: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-2724 �[1mSTEP�[0m: changing the ExternalName service to type=NodePort �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-2724 I0113 14:58:25.175852 16 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2724, replica count: 2 I0113 14:58:28.226792 16 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:58:28.226: INFO: Creating new exec pod Jan 13 14:58:31.251: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=services-2724 exec execpodknrt4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Jan 13 14:58:31.449: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 13 14:58:31.449: INFO: stdout: "" Jan 13 14:58:31.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2724 exec execpodknrt4 -- /bin/sh -x -c nc -zv -t -w 2 10.140.3.44 80' Jan 13 14:58:31.636: INFO: stderr: "+ nc -zv -t -w 2 10.140.3.44 80\nConnection to 10.140.3.44 80 port [tcp/http] succeeded!\n" Jan 13 14:58:31.636: INFO: stdout: "" Jan 13 14:58:31.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2724 exec execpodknrt4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 32225' Jan 13 14:58:31.793: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 32225\nConnection to 172.18.0.4 32225 port [tcp/32225] succeeded!\n" Jan 13 14:58:31.793: INFO: stdout: "" Jan 13 14:58:31.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2724 exec execpodknrt4 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 32225' Jan 13 14:58:31.980: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 32225\nConnection to 172.18.0.5 32225 port [tcp/32225] succeeded!\n" Jan 13 14:58:31.980: INFO: stdout: "" Jan 13 14:58:31.980: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:32.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-2724" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":38,"skipped":543,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:32.171: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-3255c39f-09df-42ed-8aad-cfccddcd2927 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:58:32.222: INFO: Waiting up to 5m0s for pod "pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92" in namespace "configmap-5799" to be "Succeeded or Failed" Jan 13 14:58:32.226: INFO: Pod "pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283886ms Jan 13 14:58:34.229: INFO: Pod "pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00770328s �[1mSTEP�[0m: Saw pod success Jan 13 14:58:34.230: INFO: Pod "pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92" satisfied condition "Succeeded or Failed" Jan 13 14:58:34.232: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-ceauut pod pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:58:34.247: INFO: Waiting for pod pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92 to disappear Jan 13 14:58:34.250: INFO: Pod pod-configmaps-ebc50bd6-7b9b-4078-a94e-1e9b196eac92 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:34.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-5799" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":621,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:34.263: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: getting the auto-created API token Jan 13 14:58:34.809: INFO: created pod pod-service-account-defaultsa Jan 13 14:58:34.810: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jan 13 14:58:34.817: INFO: created pod pod-service-account-mountsa Jan 13 14:58:34.817: INFO: pod pod-service-account-mountsa service account token volume mount: true Jan 13 14:58:34.825: INFO: created pod pod-service-account-nomountsa Jan 13 14:58:34.825: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jan 13 14:58:34.830: INFO: created pod pod-service-account-defaultsa-mountspec Jan 13 14:58:34.831: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jan 13 14:58:34.835: INFO: created pod pod-service-account-mountsa-mountspec Jan 13 14:58:34.836: INFO: pod pod-service-account-mountsa-mountspec service account 
token volume mount: true Jan 13 14:58:34.847: INFO: created pod pod-service-account-nomountsa-mountspec Jan 13 14:58:34.847: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jan 13 14:58:34.854: INFO: created pod pod-service-account-defaultsa-nomountspec Jan 13 14:58:34.855: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 13 14:58:34.862: INFO: created pod pod-service-account-mountsa-nomountspec Jan 13 14:58:34.862: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 13 14:58:34.884: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 13 14:58:34.884: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:34.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-2532" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":40,"skipped":624,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:34.906: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: fetching the /apis discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io API group in the /apis discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document �[1mSTEP�[0m: fetching the /apis/apiextensions.k8s.io discovery document �[1mSTEP�[0m: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document �[1mSTEP�[0m: fetching the /apis/apiextensions.k8s.io/v1 discovery document �[1mSTEP�[0m: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-8101" for this suite. 
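The automount opt-out spec above toggles token mounting at both the ServiceAccount and the pod level, and the pod-level setting takes precedence when both are set, which is why the defaultsa-nomountspec and mountsa-nomountspec pods report "false" above. A minimal sketch of the two knobs, with hypothetical names:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                    # hypothetical name
automountServiceAccountToken: false   # opt out at the ServiceAccount level
---
apiVersion: v1
kind: Pod
metadata:
  name: nomount-pod                   # hypothetical name
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: false # pod-level setting overrides the ServiceAccount default
  containers:
  - name: test
    image: busybox                    # illustrative image
    command: ["sleep", "600"]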
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":41,"skipped":624,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:35.023: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should test the lifecycle of a ReplicationController [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a ReplicationController �[1mSTEP�[0m: waiting for RC to be added �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: patching ReplicationController �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: patching ReplicationController status �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for available Replicas �[1mSTEP�[0m: fetching ReplicationController status �[1mSTEP�[0m: patching ReplicationController scale �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: waiting for ReplicationController's scale to be the max amount �[1mSTEP�[0m: fetching ReplicationController; ensuring that it's patched �[1mSTEP�[0m: updating ReplicationController status �[1mSTEP�[0m: waiting for RC to be modified �[1mSTEP�[0m: listing all ReplicationControllers �[1mSTEP�[0m: checking that ReplicationController has expected values �[1mSTEP�[0m: deleting ReplicationControllers by collection �[1mSTEP�[0m: waiting for ReplicationController to have a DELETED watchEvent [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:58:38.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-888" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":42,"skipped":638,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:58:38.248: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5836 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5836;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5836 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5836;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5836.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5836.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5836.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5836.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 226.205.128.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.128.205.226_udp@PTR;check="$$(dig +tcp +noall +answer +search 226.205.128.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.128.205.226_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5836 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5836;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5836 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5836;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5836.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5836.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5836.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5836.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5836.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5836.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5836.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 226.205.128.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.128.205.226_udp@PTR;check="$$(dig +tcp +noall +answer +search 226.205.128.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.128.205.226_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 13 14:58:40.335: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.341: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.349: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.356: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.367: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.447: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.451: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.457: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.464: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.468: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 
14:58:40.473: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.480: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.491: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:40.526: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:58:45.529: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.533: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.536: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.539: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.542: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.545: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.548: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.554: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 
14:58:45.575: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.578: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.581: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.584: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.587: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.590: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.593: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.596: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:45.614: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:58:50.530: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.533: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.537: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.540: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.550: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.553: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.575: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.578: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.580: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.586: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.588: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.591: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:50.610: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc 
wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:58:55.534: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.537: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.540: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.543: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.546: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.549: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.553: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.555: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.576: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.579: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.582: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.586: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.589: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from 
pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.592: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.595: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.599: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:58:55.618: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:59:00.531: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.534: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.537: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.540: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.544: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.547: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.550: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.554: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod 
dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.575: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.578: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.580: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.583: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.586: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.588: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.591: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.594: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:00.610: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:59:05.530: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.534: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.537: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the 
server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.540: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.543: INFO: Unable to read wheezy_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.546: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.549: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.551: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.572: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.575: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.577: INFO: Unable to read jessie_udp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.579: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836 from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.582: INFO: Unable to read jessie_udp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.585: INFO: Unable to read jessie_tcp@dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.587: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.591: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc from pod dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30: the server could not find the requested resource (get pods dns-test-81daffac-bfaf-400c-934a-b651b4d6af30) Jan 13 14:59:05.607: INFO: Lookups using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5836 wheezy_tcp@dns-test-service.dns-5836 wheezy_udp@dns-test-service.dns-5836.svc wheezy_tcp@dns-test-service.dns-5836.svc wheezy_udp@_http._tcp.dns-test-service.dns-5836.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5836.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5836 jessie_tcp@dns-test-service.dns-5836 jessie_udp@dns-test-service.dns-5836.svc jessie_tcp@dns-test-service.dns-5836.svc jessie_udp@_http._tcp.dns-test-service.dns-5836.svc jessie_tcp@_http._tcp.dns-test-service.dns-5836.svc] Jan 13 14:59:10.613: INFO: DNS probes using dns-5836/dns-test-81daffac-bfaf-400c-934a-b651b4d6af30 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test service �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:10.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-5836" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":640,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:10.796: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on node default medium Jan 13 14:59:10.844: INFO: Waiting up to 5m0s for pod "pod-b9975303-6c0e-414c-a90b-676af2306a13" in namespace "emptydir-6355" to be "Succeeded or Failed" Jan 13 14:59:10.847: INFO: Pod "pod-b9975303-6c0e-414c-a90b-676af2306a13": Phase="Pending", Reason="", readiness=false. Elapsed: 3.197332ms Jan 13 14:59:12.851: INFO: Pod "pod-b9975303-6c0e-414c-a90b-676af2306a13": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006772941s
STEP: Saw pod success
Jan 13 14:59:12.851: INFO: Pod "pod-b9975303-6c0e-414c-a90b-676af2306a13" satisfied condition "Succeeded or Failed"
Jan 13 14:59:12.854: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-b9975303-6c0e-414c-a90b-676af2306a13 container test-container: <nil>
STEP: delete the pod
Jan 13 14:59:12.869: INFO: Waiting for pod pod-b9975303-6c0e-414c-a90b-676af2306a13 to disappear
Jan 13 14:59:12.875: INFO: Pod pod-b9975303-6c0e-414c-a90b-676af2306a13 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:12.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":675,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:12.895: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating configMap configmap-1835/configmap-test-7497f8cb-acc7-4cbb-b18d-ec3b70213952
STEP: Creating a pod to test consume configMaps
Jan 13 14:59:12.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd" in namespace "configmap-1835" to be "Succeeded or Failed"
Jan 13 14:59:12.933: INFO: Pod "pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.085851ms
Jan 13 14:59:14.936: INFO: Pod "pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006472414s
STEP: Saw pod success
Jan 13 14:59:14.936: INFO: Pod "pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd" satisfied condition "Succeeded or Failed"
Jan 13 14:59:14.939: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd container env-test: <nil>
STEP: delete the pod
Jan 13 14:59:14.953: INFO: Waiting for pod pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd to disappear
Jan 13 14:59:14.956: INFO: Pod pod-configmaps-c9af1e7d-08cc-4448-9061-a058dbee7afd no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:14.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1835" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":683,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:14.988: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test override arguments
Jan 13 14:59:15.025: INFO: Waiting up to 5m0s for pod "client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521" in namespace "containers-2450" to be "Succeeded or Failed"
Jan 13 14:59:15.028: INFO: Pod "client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521": Phase="Pending", Reason="", readiness=false. Elapsed: 2.384795ms
Jan 13 14:59:17.032: INFO: Pod "client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006613419s
STEP: Saw pod success
Jan 13 14:59:17.032: INFO: Pod "client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521" satisfied condition "Succeeded or Failed"
Jan 13 14:59:17.034: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521 container agnhost-container: <nil>
STEP: delete the pod
Jan 13 14:59:17.046: INFO: Waiting for pod client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521 to disappear
Jan 13 14:59:17.049: INFO: Pod client-containers-a76b0e6f-4b0c-4ceb-bd79-76776d681521 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:17.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2450" for this suite.
•
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":700,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:17.058: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 13 14:59:17.593: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 13 14:59:20.617: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 14:59:20.621: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:21.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-6971" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":47,"skipped":700,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:21.797: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 14:59:23.864: INFO: Deleting pod "var-expansion-4e11e3d1-1350-4a8d-a425-312e09f704ed" in namespace "var-expansion-2266"
Jan 13 14:59:23.868: INFO: Wait up to 5m0s for pod "var-expansion-4e11e3d1-1350-4a8d-a425-312e09f704ed" to be fully deleted
[AfterEach] [k8s.io] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:25.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2266" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][Slow] [Conformance]","total":-1,"completed":48,"skipped":700,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:53:29.316: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 �[1mSTEP�[0m: Creating service test in namespace statefulset-1728 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating stateful set ss in namespace statefulset-1728 �[1mSTEP�[0m: Waiting until all stateful set ss replicas will be running in namespace statefulset-1728 Jan 13 14:53:29.381: INFO: Found 0 stateful pods, waiting for 1 Jan 13 14:53:39.384: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 13 14:53:39.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:53:39.579: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:53:39.579: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:53:39.580: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 14:53:39.583: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 14:53:49.586: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 14:53:49.587: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 14:53:49.597: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:53:49.598: INFO: ss-0 k8s-upgrade-and-conformance-4w1i3t-worker-ceauut Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:39 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC }] Jan 13 14:53:49.598: INFO: Jan 13 14:53:49.598: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 13 14:53:50.601: INFO: Verifying statefulset ss doesn't scale past 3 for another 
8.997317706s Jan 13 14:53:51.606: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.993623832s Jan 13 14:53:52.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.989231363s Jan 13 14:53:53.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.9846132s Jan 13 14:53:54.619: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.980280075s Jan 13 14:53:55.624: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975602069s Jan 13 14:53:56.629: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.970953921s Jan 13 14:53:57.633: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.965937024s Jan 13 14:53:58.637: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.179615ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1728 Jan 13 14:53:59.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:53:59.844: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 14:53:59.844: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 14:53:59.844: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 14:53:59.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:54:00.033: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 13 14:54:00.033: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 14:54:00.033: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 14:54:00.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:54:00.219: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Jan 13 14:54:00.219: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 14:54:00.219: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 14:54:00.223: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jan 13 14:54:10.227: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:54:10.227: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 14:54:10.227: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Scale down will not halt with unhealthy stateful pod Jan 13 14:54:10.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:54:10.442: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:54:10.442: 
INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:54:10.442: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 14:54:10.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:54:10.614: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:54:10.614: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:54:10.614: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 14:54:10.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 14:54:10.798: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 14:54:10.798: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 14:54:10.798: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 14:54:10.798: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 14:54:10.803: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 13 14:54:20.811: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 14:54:20.811: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 14:54:20.811: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 14:54:20.822: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:20.822: INFO: ss-0 k8s-upgrade-and-conformance-4w1i3t-worker-ceauut Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC }] Jan 13 14:54:20.822: INFO: ss-1 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:20.822: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC 
}] Jan 13 14:54:20.822: INFO: Jan 13 14:54:20.822: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 14:54:21.835: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:21.835: INFO: ss-0 k8s-upgrade-and-conformance-4w1i3t-worker-ceauut Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC }] Jan 13 14:54:21.835: INFO: ss-1 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:21.835: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:21.835: INFO: Jan 13 14:54:21.835: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 13 14:54:22.839: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:22.839: INFO: ss-0 k8s-upgrade-and-conformance-4w1i3t-worker-ceauut Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:29 +0000 UTC }] Jan 13 14:54:22.839: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:22.839: INFO: Jan 13 14:54:22.839: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 13 14:54:23.843: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:23.843: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:23.843: INFO: Jan 13 14:54:23.843: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:24.847: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:24.847: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:24.847: INFO: Jan 13 14:54:24.847: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:25.854: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:25.854: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:25.854: INFO: Jan 13 14:54:25.854: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:26.858: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:26.858: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:26.858: INFO: Jan 13 14:54:26.858: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:27.864: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:27.864: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:27.864: INFO: Jan 13 14:54:27.864: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:28.867: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:28.868: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:28.868: INFO: Jan 13 14:54:28.868: INFO: StatefulSet ss has not reached scale 0, at 1 Jan 13 14:54:29.871: INFO: POD NODE PHASE GRACE CONDITIONS Jan 13 14:54:29.871: INFO: ss-2 k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:54:10 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 14:53:49 +0000 UTC }] Jan 13 14:54:29.871: INFO: Jan 13 14:54:29.871: INFO: StatefulSet ss has not reached scale 0, at 1 �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-1728 Jan 13 14:54:30.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:54:30.995: INFO: rc: 1 Jan 13 14:54:30.995: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 13 14:54:40.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:54:41.083: INFO: rc: 1 Jan 13 14:54:41.083: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:54:51.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:54:51.169: INFO: rc: 1 Jan 13 14:54:51.169: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:01.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:01.275: INFO: rc: 1 Jan 13 14:55:01.275: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:11.276: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:11.385: INFO: rc: 1 Jan 13 14:55:11.385: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:21.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:21.500: INFO: rc: 1 Jan 13 14:55:21.500: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:31.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:31.590: INFO: rc: 1 Jan 13 14:55:31.590: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:41.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:41.683: INFO: rc: 1 Jan 13 14:55:41.683: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:55:51.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:55:51.787: INFO: rc: 1 Jan 13 14:55:51.787: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:01.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:01.881: INFO: rc: 1 Jan 13 14:56:01.881: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:11.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:11.966: INFO: rc: 1 Jan 13 14:56:11.966: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:21.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:22.056: INFO: rc: 1 Jan 13 14:56:22.056: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:32.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:32.153: INFO: rc: 1 Jan 13 14:56:32.153: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:42.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:42.260: INFO: rc: 1 Jan 13 14:56:42.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:56:52.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:56:52.350: INFO: rc: 1 Jan 13 14:56:52.350: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:02.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:57:02.451: INFO: rc: 1 Jan 13 14:57:02.451: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:12.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jan 13 14:57:12.539: INFO: rc: 1 Jan 13 14:57:12.539: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:22.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:57:22.630: INFO: rc: 1 Jan 13 14:57:22.630: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:32.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:57:32.723: INFO: rc: 1 Jan 13 14:57:32.723: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:42.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:57:42.913: INFO: rc: 1 Jan 13 14:57:42.913: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:57:52.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:57:52.998: INFO: rc: 1 Jan 13 14:57:52.998: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:02.998: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:03.091: INFO: rc: 1 Jan 13 14:58:03.091: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:13.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:13.180: INFO: rc: 1 Jan 13 
14:58:13.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:23.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:23.270: INFO: rc: 1 Jan 13 14:58:23.270: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:33.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:33.359: INFO: rc: 1 Jan 13 14:58:33.359: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:43.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:43.464: INFO: rc: 1 Jan 13 14:58:43.465: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:58:53.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:58:53.568: INFO: rc: 1 Jan 13 14:58:53.569: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:59:03.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:59:03.659: INFO: rc: 1 Jan 13 14:59:03.659: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1 Jan 13 14:59:13.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 14:59:13.754: INFO: rc: 1 Jan 13 14:59:13.754: INFO: Waiting 10s to retry failed RunHostCmd: error running 
/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1
Jan 13 14:59:23.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 13 14:59:23.838: INFO: rc: 1
Jan 13 14:59:23.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-2" not found error: exit status 1
Jan 13 14:59:33.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-1728 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 13 14:59:33.933: INFO: rc: 1
Jan 13 14:59:33.933: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2:
Jan 13 14:59:33.933: INFO: Scaling statefulset ss to 0
Jan 13 14:59:33.953: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114
Jan 13 14:59:33.957: INFO: Deleting all statefulset in ns statefulset-1728
Jan 13 14:59:33.960: INFO: Scaling statefulset ss to 0
Jan 13 14:59:33.970: INFO: Waiting for statefulset status.replicas updated to 0
Jan 13 14:59:33.973: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:33.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1728" for this suite.
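For context, the scale-down this spec drives through the e2e framework is roughly equivalent to the manual commands below (a sketch only: the framework calls the API client directly rather than shelling out, and the "pods ss-2 not found" retries above are expected once the pod has already been deleted by the scale-down):

# Scale the test StatefulSet "ss" down to zero replicas
kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-1728 scale statefulset ss --replicas=0
# Poll until the controller reports no remaining replicas, which is what the spec waits for
kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-1728 get statefulset ss -o jsonpath='{.status.replicas}'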
• [SLOW TEST:364.681 seconds]
[sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":32,"skipped":710,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:25.892: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:36.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7549" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller.
[Conformance]","total":-1,"completed":49,"skipped":705,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:34.066: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 14:59:34.564: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 14:59:37.590: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API �[1mSTEP�[0m: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API �[1mSTEP�[0m: Creating a dummy validating-webhook-configuration object �[1mSTEP�[0m: Deleting the validating-webhook-configuration, which should be possible to remove �[1mSTEP�[0m: Creating a dummy mutating-webhook-configuration object �[1mSTEP�[0m: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:37.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-16" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-16-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":33,"skipped":756,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:36.987: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap configmap-9644/configmap-test-e0c1d19c-5ff0-4791-8316-04356e751ad5 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:59:37.029: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517" in namespace "configmap-9644" to be "Succeeded or Failed" Jan 13 14:59:37.032: INFO: Pod "pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798179ms Jan 13 14:59:39.035: INFO: Pod "pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005843444s �[1mSTEP�[0m: Saw pod success Jan 13 14:59:39.035: INFO: Pod "pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517" satisfied condition "Succeeded or Failed" Jan 13 14:59:39.038: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517 container env-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:59:39.055: INFO: Waiting for pod pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517 to disappear Jan 13 14:59:39.059: INFO: Pod pod-configmaps-7ffced6c-f96e-426b-8181-dcaa6cab2517 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:39.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-9644" for this suite. 
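The ConfigMap-as-environment-variable spec above boils down to a pod whose container pulls a key from a ConfigMap via valueFrom. A minimal hand-written sketch of that shape (illustrative names, not the randomly generated ones the framework uses):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test            # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.36           # assumption: any image with /bin/sh works here
    command: ["sh", "-c", "env | grep CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:          # injects the ConfigMap key as an environment variable
          name: configmap-test
          key: data-1
EOF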
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":720,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:39.120: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Request ServerVersion
STEP: Confirm major version
Jan 13 14:59:39.150: INFO: Major version: 1
STEP: Confirm minor version
Jan 13 14:59:39.150: INFO: cleanMinorVersion: 20
Jan 13 14:59:39.150: INFO: Minor version: 20
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:39.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-9965" for this suite.
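The server-version spec only reads the version endpoint and checks the major/minor fields reported there (1 and 20 in this run, matching the v1.20.15 upgrade target). A rough manual equivalent, as a sketch:

# Print the API server's reported version, including major and minor, as JSON
kubectl --kubeconfig=/tmp/kubeconfig version -o json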
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":51,"skipped":757,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:37.764: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 14:59:37.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08" in namespace "downward-api-2968" to be "Succeeded or Failed" Jan 13 14:59:37.821: INFO: Pod "downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08": Phase="Pending", Reason="", readiness=false. Elapsed: 3.630385ms Jan 13 14:59:39.824: INFO: Pod "downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00721953s �[1mSTEP�[0m: Saw pod success Jan 13 14:59:39.824: INFO: Pod "downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08" satisfied condition "Succeeded or Failed" Jan 13 14:59:39.827: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:59:39.840: INFO: Waiting for pod downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08 to disappear Jan 13 14:59:39.842: INFO: Pod downwardapi-volume-8a371fa6-f992-4526-bbcd-ee6b6df05e08 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:39.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-2968" for this suite. 
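The downward API volume spec above exercises a pod whose downwardAPI volume item points at the container's memory limit; because the container sets no limit, the published value falls back to the node's allocatable memory. A minimal hand-written sketch of such a pod (illustrative names, not the exact manifest the framework generates):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36               # assumption: any image that can cat a file works
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory     # with no limit set, this resolves to node allocatable memory
EOF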
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":767,"failed":0}
------------------------------
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 14:59:39.859: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:52
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 13 14:59:43.926: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 14:59:43.930: INFO: Pod pod-with-poststart-http-hook still exists
Jan 13 14:59:45.930: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 14:59:45.934: INFO: Pod pod-with-poststart-http-hook still exists
Jan 13 14:59:47.930: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 13 14:59:47.933: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 14:59:47.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8465" for this suite.
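The lifecycle-hook spec first creates a pod that serves the hook endpoint, then a second pod (pod-with-poststart-http-hook in the log) whose container declares a postStart httpGet hook pointed at it. A minimal sketch of the hooked pod; the handler host and port here are placeholders, not values taken from this run:

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: busybox:1.36               # assumption: any long-running image works
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        httpGet:                      # kubelet issues this GET right after the container starts
          path: /echo?msg=poststart   # placeholder path
          host: 10.0.0.10             # placeholder: IP of the pod serving the hook request
          port: 8080
EOF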
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":771,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:47.983: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-e03a95cd-78a0-44ac-94fa-55faf0f75069 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:59:48.027: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12" in namespace "projected-6326" to be "Succeeded or Failed" Jan 13 14:59:48.030: INFO: Pod "pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.001985ms Jan 13 14:59:50.033: INFO: Pod "pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006184508s �[1mSTEP�[0m: Saw pod success Jan 13 14:59:50.033: INFO: Pod "pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12" satisfied condition "Succeeded or Failed" Jan 13 14:59:50.036: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12 container projected-configmap-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:59:50.060: INFO: Waiting for pod pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12 to disappear Jan 13 14:59:50.067: INFO: Pod pod-projected-configmaps-d37d32e8-ab07-43a4-9d47-130c2ebf9b12 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:50.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-6326" for this suite. 
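The projected-ConfigMap spec above mounts the same ConfigMap through two projected volumes in a single pod and reads the key from both paths. A minimal sketch under illustrative names:

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.36                     # assumption
    command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
    volumeMounts:
    - { name: vol-1, mountPath: /etc/projected-1 }
    - { name: vol-2, mountPath: /etc/projected-2 }
  volumes:
  - name: vol-1
    projected:
      sources:
      - configMap: { name: projected-configmap-test }   # hypothetical ConfigMap holding key data-1
  - name: vol-2
    projected:
      sources:
      - configMap: { name: projected-configmap-test }   # same source, second mount point
EOF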
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":800,"failed":0} [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:50.084: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-4e091961-a68f-4478-868b-dec8a56a3604 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 14:59:50.134: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04" in namespace "configmap-4651" to be "Succeeded or Failed" Jan 13 14:59:50.139: INFO: Pod "pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.203703ms Jan 13 14:59:52.142: INFO: Pod "pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007947474s �[1mSTEP�[0m: Saw pod success Jan 13 14:59:52.142: INFO: Pod "pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04" satisfied condition "Succeeded or Failed" Jan 13 14:59:52.145: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:59:52.160: INFO: Waiting for pod pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04 to disappear Jan 13 14:59:52.164: INFO: Pod pod-configmaps-ba51e3fa-96f5-45c8-a01f-2f415cb6ed04 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:52.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-4651" for this suite. 
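The plain ConfigMap-volume spec above is the non-projected variant of the same idea: one ConfigMap mounted as a regular volume and read back from the pod. By hand it is roughly (a sketch, with illustrative names):

# Create a ConfigMap, then a throwaway pod that mounts it as a volume and prints the key
kubectl --kubeconfig=/tmp/kubeconfig create configmap configmap-test-volume --from-literal=data-1=value-1
kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-volume       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: busybox:1.36             # assumption
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume
EOF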
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":800,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:39.193: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: set up a multi version CRD Jan 13 14:59:39.224: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: mark a version not serverd �[1mSTEP�[0m: check the unserved version gets removed �[1mSTEP�[0m: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:54.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-7461" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":52,"skipped":782,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:54.737: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 13 14:59:54.772: INFO: Waiting up to 5m0s for pod "downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498" in namespace "downward-api-4033" to be "Succeeded or Failed" Jan 13 14:59:54.775: INFO: Pod "downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498": Phase="Pending", Reason="", readiness=false. Elapsed: 2.545948ms Jan 13 14:59:56.779: INFO: Pod "downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006601916s �[1mSTEP�[0m: Saw pod success Jan 13 14:59:56.779: INFO: Pod "downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498" satisfied condition "Succeeded or Failed" Jan 13 14:59:56.781: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 14:59:56.796: INFO: Waiting for pod downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498 to disappear Jan 13 14:59:56.799: INFO: Pod downward-api-0beeb5b1-4593-47fb-9b07-f92bd9c44498 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:56.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4033" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":782,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:56.825: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 13 14:59:58.878: INFO: Expected: &{} to match Container's Termination Message: -- �[1mSTEP�[0m: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 14:59:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-3981" for this suite. 
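The termination-message spec above runs a container that exits successfully while using TerminationMessagePolicy FallbackToLogsOnError, then asserts the recorded message stays empty (logs are only used as the message when the container fails). A hand-written sketch of that check, with illustrative names:

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox:1.36                 # assumption
    command: ["sh", "-c", "exit 0"]     # succeeds without writing /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod completes, the terminated state's message field should be empty:
kubectl --kubeconfig=/tmp/kubeconfig get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'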
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":791,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:58.941: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:00:05.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-6410" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":55,"skipped":824,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:00:06.030: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap that has name configmap-test-emptyKey-0ce4717f-0e2b-460f-abc7-84fe04d8e454 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:00:06.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-4809" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":56,"skipped":848,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":7,"skipped":110,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:57:42.755: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-3788 �[1mSTEP�[0m: creating service affinity-nodeport-transition in namespace services-3788 �[1mSTEP�[0m: creating 
replication controller affinity-nodeport-transition in namespace services-3788 I0113 14:57:42.833234 14 runners.go:190] Created replication controller with name: affinity-nodeport-transition, namespace: services-3788, replica count: 3 I0113 14:57:45.884482 14 runners.go:190] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:57:45.894: INFO: Creating new exec pod Jan 13 14:57:48.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-transition 80' Jan 13 14:57:49.084: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n" Jan 13 14:57:49.084: INFO: stdout: "" Jan 13 14:57:49.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c nc -zv -t -w 2 10.128.95.115 80' Jan 13 14:57:49.255: INFO: stderr: "+ nc -zv -t -w 2 10.128.95.115 80\nConnection to 10.128.95.115 80 port [tcp/http] succeeded!\n" Jan 13 14:57:49.255: INFO: stdout: "" Jan 13 14:57:49.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31141' Jan 13 14:57:49.436: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 31141\nConnection to 172.18.0.6 31141 port [tcp/31141] succeeded!\n" Jan 13 14:57:49.436: INFO: stdout: "" Jan 13 14:57:49.436: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31141' Jan 13 14:57:49.599: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 31141\nConnection to 172.18.0.4 31141 port [tcp/31141] succeeded!\n" Jan 13 14:57:49.599: INFO: stdout: "" Jan 13 14:57:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31141/ ; done' Jan 13 14:58:39.813: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n" Jan 13 14:58:39.813: INFO: stdout: "\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\n" Jan 13 14:58:39.813: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 14:58:39.813: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 14:59:09.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31141/ ; done' Jan 13 15:00:00.035: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n" Jan 13 15:00:00.036: INFO: stdout: "\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-rjdfx\n" Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:00:00.036: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:00:09.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31141/ ; done' Jan 13 15:01:00.095: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n" Jan 13 15:01:00.095: INFO: stdout: "\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\naffinity-nodeport-transition-rjdfx\n" Jan 13 15:01:00.095: INFO: Received response from host: 
affinity-nodeport-transition-7fckn Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.095: INFO: Received response from host: affinity-nodeport-transition-rjdfx Jan 13 15:01:00.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3788 exec execpod-affinitytd77h -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31141/ ; done' Jan 13 15:01:00.406: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31141/\n" Jan 13 15:01:00.406: INFO: stdout: "\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn\naffinity-nodeport-transition-7fckn" Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: 
affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Received response from host: affinity-nodeport-transition-7fckn Jan 13 15:01:00.406: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-transition in namespace services-3788, will wait for the garbage collector to delete the pods Jan 13 15:01:00.481: INFO: Deleting ReplicationController affinity-nodeport-transition took: 6.575491ms Jan 13 15:01:00.982: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 500.456916ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:12.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3788" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m• [SLOW TEST:209.971 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23�[0m should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":110,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:12.801: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the 
schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:01:12.887: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 13 15:01:15.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8285 --namespace=crd-publish-openapi-8285 create -f -' Jan 13 15:01:16.119: INFO: stderr: "" Jan 13 15:01:16.119: INFO: stdout: "e2e-test-crd-publish-openapi-5333-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 13 15:01:16.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8285 --namespace=crd-publish-openapi-8285 delete e2e-test-crd-publish-openapi-5333-crds test-cr' Jan 13 15:01:16.213: INFO: stderr: "" Jan 13 15:01:16.213: INFO: stdout: "e2e-test-crd-publish-openapi-5333-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 13 15:01:16.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8285 --namespace=crd-publish-openapi-8285 apply -f -' Jan 13 15:01:16.456: INFO: stderr: "" Jan 13 15:01:16.456: INFO: stdout: "e2e-test-crd-publish-openapi-5333-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 13 15:01:16.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8285 --namespace=crd-publish-openapi-8285 delete e2e-test-crd-publish-openapi-5333-crds test-cr' Jan 13 15:01:16.559: INFO: stderr: "" Jan 13 15:01:16.559: INFO: stdout: "e2e-test-crd-publish-openapi-5333-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Jan 13 15:01:16.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8285 explain e2e-test-crd-publish-openapi-5333-crds' Jan 13 15:01:16.815: INFO: stderr: "" Jan 13 15:01:16.815: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5333-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:19.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-8285" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":9,"skipped":148,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:19.200: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a watch on configmaps with a certain label �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: changing the label value of the configmap �[1mSTEP�[0m: Expecting to observe a delete notification for the watched object Jan 13 15:01:19.262: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10707 0 2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:01:19.263: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10708 0 2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:01:19.263: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10709 0 2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying the configmap a second time �[1mSTEP�[0m: Expecting not to observe a notification because the object no longer meets the selector's requirements �[1mSTEP�[0m: changing the label value of the configmap back �[1mSTEP�[0m: modifying the configmap a third time �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: Expecting to observe an add notification for the watched object when the label value was restored Jan 13 15:01:29.296: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10732 0 
2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:01:29.297: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10733 0 2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:01:29.297: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-3486 c0960243-2815-4e2f-9613-d0dbf1d537f3 10734 0 2023-01-13 15:01:19 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-13 15:01:19 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:29.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-3486" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":10,"skipped":150,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":9,"skipped":244,"failed":2,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:54:22.463: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create and stop a working application [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating all guestbook components
Jan 13 14:54:22.504: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-replica
  labels:
    app: agnhost
    role: replica
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: replica
    tier: backend
Jan 13 14:54:22.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:23.599: INFO: stderr: ""
Jan 13 14:54:23.599: INFO: stdout: "service/agnhost-replica created\n"
Jan 13 14:54:23.599: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-primary
  labels:
    app: agnhost
    role: primary
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: primary
    tier: backend
Jan 13 14:54:23.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:23.863: INFO: stderr: ""
Jan 13 14:54:23.863: INFO: stdout: "service/agnhost-primary created\n"
Jan 13 14:54:23.863: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 13 14:54:23.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:24.114: INFO: stderr: ""
Jan 13 14:54:24.114: INFO: stdout: "service/frontend created\n"
Jan 13 14:54:24.114: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
Jan 13 14:54:24.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:24.341: INFO: stderr: ""
Jan 13 14:54:24.341: INFO: stdout: "deployment.apps/frontend created\n"
Jan 13 14:54:24.341: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-primary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: primary
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: primary
        tier: backend
    spec:
      containers:
      - name: primary
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 13 14:54:24.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:24.632: INFO: stderr: ""
Jan 13 14:54:24.632: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Jan 13 14:54:24.632: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-replica
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: replica
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: replica
        tier: backend
    spec:
      containers:
      - name: replica
        image: k8s.gcr.io/e2e-test-images/agnhost:2.21
        args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 13 14:54:24.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 create -f -'
Jan 13 14:54:24.915: INFO: stderr: ""
Jan 13 14:54:24.915: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Jan 13 14:54:24.915: INFO: Waiting for all frontend pods to be Running.
Jan 13 14:54:29.965: INFO: Waiting for frontend to serve content.
Jan 13 14:58:03.987: INFO: Failed to get response from guestbook. err: an error on the server ("unknown") has prevented the request from succeeding (get services frontend), response: k8s� � �v1��Status��� � �������Failure�ierror trying to reach service: read tcp 172.18.0.9:37092->192.168.2.22:80: read: connection reset by peer"�0����"�
Jan 13 14:58:08.996: INFO: Trying to add a new entry to the guestbook.
Jan 13 14:58:09.004: INFO: Verifying that added entry can be retrieved.
Jan 13 15:01:43.119: INFO: Failed to get response from guestbook. err: an error on the server ("unknown") has prevented the request from succeeding (get services frontend), response: k8s� � �v1��Status��� � �������Failure�ierror trying to reach service: read tcp 172.18.0.9:38134->192.168.2.22:80: read: connection reset by peer"�0����"�
Jan 13 15:01:48.119: FAIL: Entry to guestbook wasn't correctly added in 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.7.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 +0x159
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0022ab200, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
STEP: using delete to clean up resources
Jan 13 15:01:48.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.229: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.229: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 15:01:48.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.360: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.360: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 15:01:48.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.485: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.485: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 15:01:48.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.581: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.581: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 15:01:48.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.684: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.684: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Jan 13 15:01:48.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4740 delete --grace-period=0 --force -f -'
Jan 13 15:01:48.827: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 13 15:01:48.828: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:01:48.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4740" for this suite.
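Triage note: the two "Failed to get response from guestbook" errors above appear to come back through the apiserver's service proxy; the reset connection is the apiserver host (172.18.0.9) dialing the frontend pod IP (192.168.2.22:80), which suggests a node-to-pod networking problem rather than a crash of the guestbook pods themselves. A minimal way to re-exercise the same path by hand is sketched below; it assumes the workload cluster kubeconfig at /tmp/kubeconfig is still reachable and that the kubectl-4740 namespace has not yet been cleaned up (both are assumptions, since the test deletes the namespace on teardown).

    # Hit the frontend Service through the apiserver service proxy, the same path the e2e check uses
    kubectl --kubeconfig=/tmp/kubeconfig get --raw '/api/v1/namespaces/kubectl-4740/services/frontend/proxy/'

    # Compare with the Service's endpoints; healthy endpoints plus a proxy-side reset points at CNI/kube-proxy
    kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-4740 get endpoints frontend -o wide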
�[91m�[1m• Failure [446.394 seconds]�[0m [sig-cli] Kubectl client �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23�[0m Guestbook application �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:342�[0m �[91m�[1mshould create and stop a working application [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 15:01:48.119: Entry to guestbook wasn't correctly added in 180 seconds.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:378 �[90m------------------------------�[0m {"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":9,"skipped":244,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:48.950: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Jan 13 15:01:48.999: INFO: Waiting up to 5m0s for pod "pod-842f2d73-9a0a-451a-9209-887b5be596bb" in namespace "emptydir-7774" to be "Succeeded or Failed" Jan 13 15:01:49.002: INFO: Pod "pod-842f2d73-9a0a-451a-9209-887b5be596bb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134969ms Jan 13 15:01:51.006: INFO: Pod "pod-842f2d73-9a0a-451a-9209-887b5be596bb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007053246s �[1mSTEP�[0m: Saw pod success Jan 13 15:01:51.006: INFO: Pod "pod-842f2d73-9a0a-451a-9209-887b5be596bb" satisfied condition "Succeeded or Failed" Jan 13 15:01:51.009: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-842f2d73-9a0a-451a-9209-887b5be596bb container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:01:51.036: INFO: Waiting for pod pod-842f2d73-9a0a-451a-9209-887b5be596bb to disappear Jan 13 15:01:51.039: INFO: Pod pod-842f2d73-9a0a-451a-9209-887b5be596bb no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:51.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-7774" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":283,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:29.347: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-9118 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Jan 13 15:01:29.374: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jan 13 15:01:29.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jan 13 15:01:31.427: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:33.427: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:35.426: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:37.426: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:39.426: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:41.426: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:43.427: INFO: The status of Pod netserver-0 is Running (Ready = false) Jan 13 15:01:45.426: INFO: The status of Pod netserver-0 is Running (Ready = true) Jan 13 15:01:45.432: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 13 15:01:47.435: INFO: The status of Pod netserver-1 is Running (Ready = false) Jan 13 15:01:49.438: INFO: The status of Pod netserver-1 is Running (Ready = true) Jan 13 15:01:49.449: INFO: The status of Pod netserver-2 is Running (Ready = true) Jan 13 15:01:49.457: INFO: The 
status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Jan 13 15:01:51.511: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Jan 13 15:01:51.511: INFO: Going to poll 192.168.1.31 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 13 15:01:51.514: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.1.31 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:01:51.514: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:01:52.580: INFO: Found all 1 expected endpoints: [netserver-0] Jan 13 15:01:52.580: INFO: Going to poll 192.168.0.94 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 13 15:01:52.583: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.94 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:01:52.583: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:01:53.634: INFO: Found all 1 expected endpoints: [netserver-1] Jan 13 15:01:53.634: INFO: Going to poll 192.168.2.34 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 13 15:01:53.637: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.34 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:01:53.637: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:01:54.708: INFO: Found all 1 expected endpoints: [netserver-2] Jan 13 15:01:54.708: INFO: Going to poll 192.168.6.32 on port 8081 at least 0 times, with a maximum of 46 tries before failing Jan 13 15:01:54.711: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.32 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9118 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:01:54.711: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:01:55.793: INFO: Found all 1 expected endpoints: [netserver-3] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:55.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-9118" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":180,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:55.820: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: create deployment with httpd image Jan 13 15:01:55.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7300 create -f -' Jan 13 15:01:56.639: INFO: stderr: "" Jan 13 15:01:56.639: INFO: stdout: "deployment.apps/httpd-deployment created\n" �[1mSTEP�[0m: verify diff finds difference between live and declared image Jan 13 15:01:56.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7300 diff -f -' Jan 13 15:01:57.045: INFO: rc: 1 Jan 13 15:01:57.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7300 delete -f -' Jan 13 15:01:57.152: INFO: stderr: "" Jan 13 15:01:57.152: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:57.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7300" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":12,"skipped":191,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:57.180: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-map-1e962d12-4bdf-4012-a8be-7c8f6480461d �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 15:01:57.217: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8" in namespace "configmap-7424" to be "Succeeded or Failed" Jan 13 15:01:57.220: INFO: Pod "pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.892711ms Jan 13 15:01:59.223: INFO: Pod "pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005630067s �[1mSTEP�[0m: Saw pod success Jan 13 15:01:59.223: INFO: Pod "pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8" satisfied condition "Succeeded or Failed" Jan 13 15:01:59.226: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-ceauut pod pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:01:59.253: INFO: Waiting for pod pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8 to disappear Jan 13 15:01:59.256: INFO: Pod pod-configmaps-4fcfa50b-25d0-4842-8674-fdaab307c8a8 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:01:59.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-7424" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":197,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:51.055: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename proxy �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: starting an echo server on multiple ports �[1mSTEP�[0m: creating replication controller proxy-service-8d2w6 in namespace proxy-6585 I0113 15:01:51.113569 18 runners.go:190] Created replication controller with name: proxy-service-8d2w6, namespace: proxy-6585, replica count: 1 I0113 15:01:52.164086 18 runners.go:190] proxy-service-8d2w6 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0113 15:01:53.164483 18 runners.go:190] proxy-service-8d2w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 15:01:54.164742 18 runners.go:190] proxy-service-8d2w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 15:01:55.165056 18 runners.go:190] proxy-service-8d2w6 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0113 15:01:56.165338 18 runners.go:190] proxy-service-8d2w6 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:01:56.168: INFO: setup took 5.079216863s, starting test cases �[1mSTEP�[0m: running 16 cases, 20 attempts per case, 320 total attempts Jan 13 15:01:56.176: INFO: (0) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.304779ms) Jan 13 15:01:56.176: INFO: (0) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 8.16559ms) Jan 13 15:01:56.177: INFO: (0) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.564374ms) Jan 13 15:01:56.177: INFO: (0) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 8.827156ms) Jan 13 15:01:56.177: INFO: (0) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.691977ms) Jan 13 15:01:56.177: INFO: (0) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 9.18323ms) Jan 13 15:01:56.178: INFO: (0) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 9.715104ms) Jan 13 15:01:56.178: INFO: (0) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.128034ms) Jan 13 15:01:56.179: INFO: (0) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 10.34288ms) Jan 13 15:01:56.179: INFO: (0) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 10.535629ms) Jan 13 15:01:56.179: INFO: (0) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.853339ms) Jan 13 15:01:56.181: INFO: (0) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 12.97233ms) Jan 13 15:01:56.181: INFO: (0) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 13.330362ms) Jan 13 15:01:56.182: INFO: (0) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 13.133845ms) Jan 13 15:01:56.182: INFO: (0) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 13.898166ms) Jan 13 15:01:56.182: INFO: (0) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 14.042689ms) Jan 13 15:01:56.188: INFO: (1) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 4.729666ms) Jan 13 15:01:56.196: INFO: (1) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 12.447239ms) Jan 13 15:01:56.196: INFO: (1) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 12.989928ms) Jan 13 15:01:56.196: INFO: (1) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 12.88317ms) Jan 13 15:01:56.196: INFO: (1) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 12.779139ms) Jan 13 15:01:56.196: INFO: (1) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 12.482866ms) Jan 13 15:01:56.197: INFO: (1) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 14.96297ms) Jan 13 15:01:56.197: INFO: (1) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 14.25236ms) Jan 13 15:01:56.198: INFO: (1) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 14.076126ms) Jan 13 15:01:56.198: INFO: (1) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 15.134587ms) Jan 13 15:01:56.198: INFO: (1) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 14.609319ms) Jan 13 15:01:56.198: INFO: (1) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 14.565477ms) Jan 13 15:01:56.198: INFO: (1) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 15.653086ms) Jan 13 15:01:56.199: INFO: (1) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 15.922587ms) Jan 13 15:01:56.199: INFO: (1) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 16.294927ms) Jan 13 15:01:56.200: INFO: (1) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 16.465332ms) Jan 13 15:01:56.205: INFO: (2) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 4.829314ms) Jan 13 15:01:56.208: INFO: (2) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 8.480311ms) Jan 13 15:01:56.208: INFO: (2) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.55505ms) Jan 13 15:01:56.208: INFO: (2) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 8.510159ms) Jan 13 15:01:56.208: INFO: (2) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 8.7224ms) Jan 13 15:01:56.208: INFO: (2) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 8.66717ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 8.913083ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 8.994038ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 9.072007ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 9.134219ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.093142ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 9.050058ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 9.120955ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.14328ms) Jan 13 15:01:56.209: INFO: (2) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 9.179686ms) Jan 13 15:01:56.210: INFO: (2) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 9.945948ms) Jan 13 15:01:56.218: INFO: (3) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 8.295945ms) Jan 13 15:01:56.220: INFO: (3) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 10.075167ms) Jan 13 15:01:56.220: INFO: (3) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.092731ms) Jan 13 15:01:56.220: INFO: (3) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 10.359343ms) Jan 13 15:01:56.220: INFO: (3) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.533415ms) Jan 13 15:01:56.221: INFO: (3) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 10.619291ms) Jan 13 15:01:56.221: INFO: (3) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 11.048339ms) Jan 13 15:01:56.221: INFO: (3) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.908355ms) Jan 13 15:01:56.221: INFO: (3) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 11.254083ms) Jan 13 15:01:56.222: INFO: (3) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 12.129757ms) Jan 13 15:01:56.222: INFO: (3) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 12.361521ms) Jan 13 15:01:56.223: INFO: (3) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 12.651438ms) Jan 13 15:01:56.223: INFO: (3) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 12.703811ms) Jan 13 15:01:56.223: INFO: (3) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 13.190317ms) Jan 13 15:01:56.223: INFO: (3) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 13.523349ms) Jan 13 15:01:56.224: INFO: (3) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 13.558709ms) Jan 13 15:01:56.231: INFO: (4) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 7.487003ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.713859ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.70754ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.11282ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 8.025728ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.305284ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 8.261015ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 8.215765ms) Jan 13 15:01:56.232: INFO: (4) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 8.292159ms) Jan 13 15:01:56.233: INFO: (4) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 8.967644ms) Jan 13 15:01:56.233: INFO: (4) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.577035ms) Jan 13 15:01:56.234: INFO: (4) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 9.542915ms) Jan 13 15:01:56.235: INFO: (4) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.930346ms) Jan 13 15:01:56.235: INFO: (4) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.176322ms) Jan 13 15:01:56.235: INFO: (4) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 11.148561ms) Jan 13 15:01:56.235: INFO: (4) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.026446ms) Jan 13 15:01:56.241: INFO: (5) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 5.764837ms) Jan 13 15:01:56.241: INFO: (5) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 5.487597ms) Jan 13 15:01:56.241: INFO: (5) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 5.166132ms) Jan 13 15:01:56.243: INFO: (5) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.167448ms) Jan 13 15:01:56.244: INFO: (5) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 8.397248ms) Jan 13 15:01:56.245: INFO: (5) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 9.206302ms) Jan 13 15:01:56.245: INFO: (5) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 9.422141ms) Jan 13 15:01:56.245: INFO: (5) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.16729ms) Jan 13 15:01:56.245: INFO: (5) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 9.819004ms) Jan 13 15:01:56.245: INFO: (5) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 9.779262ms) Jan 13 15:01:56.246: INFO: (5) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 10.251748ms) Jan 13 15:01:56.246: INFO: (5) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 9.930228ms) Jan 13 15:01:56.246: INFO: (5) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 10.172225ms) Jan 13 15:01:56.246: INFO: (5) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 10.858207ms) Jan 13 15:01:56.247: INFO: (5) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 11.113779ms) Jan 13 15:01:56.247: INFO: (5) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.773424ms) Jan 13 15:01:56.252: INFO: (6) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 5.590335ms) Jan 13 15:01:56.253: INFO: (6) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 5.652717ms) Jan 13 15:01:56.252: INFO: (6) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 5.524042ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.747298ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 8.831843ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 8.880304ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.956683ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 8.928712ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 9.037582ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 9.041568ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 8.97472ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 8.991437ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 9.139974ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 9.116638ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.172444ms) Jan 13 15:01:56.256: INFO: (6) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 9.157666ms) Jan 13 15:01:56.261: INFO: (7) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 5.108392ms) Jan 13 15:01:56.262: INFO: (7) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 5.383812ms) Jan 13 15:01:56.262: INFO: (7) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 5.130272ms) Jan 13 15:01:56.263: INFO: (7) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 5.670297ms) Jan 13 15:01:56.263: INFO: (7) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 6.171622ms) Jan 13 15:01:56.263: INFO: (7) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 5.747677ms) Jan 13 15:01:56.265: INFO: (7) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.154582ms) Jan 13 15:01:56.265: INFO: (7) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 8.501587ms) Jan 13 15:01:56.266: INFO: (7) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.972843ms) Jan 13 15:01:56.266: INFO: (7) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.188902ms) Jan 13 15:01:56.266: INFO: (7) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 9.639888ms) Jan 13 15:01:56.266: INFO: (7) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 9.580534ms) Jan 13 15:01:56.268: INFO: (7) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 11.511893ms) Jan 13 15:01:56.268: INFO: (7) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.298562ms) Jan 13 15:01:56.268: INFO: (7) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.586935ms) Jan 13 15:01:56.268: INFO: (7) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 11.82151ms) Jan 13 15:01:56.276: INFO: (8) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.679697ms) Jan 13 15:01:56.276: INFO: (8) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 7.8292ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.917843ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 7.846668ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 7.809545ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 7.693961ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.921835ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.985503ms) Jan 13 15:01:56.277: INFO: (8) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.799324ms) Jan 13 15:01:56.279: INFO: (8) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 10.447361ms) Jan 13 15:01:56.280: INFO: (8) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.978923ms) Jan 13 15:01:56.280: INFO: (8) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 11.298669ms) Jan 13 15:01:56.280: INFO: (8) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 11.600494ms) Jan 13 15:01:56.280: INFO: (8) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.497676ms) Jan 13 15:01:56.280: INFO: (8) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.739385ms) Jan 13 15:01:56.281: INFO: (8) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 11.835807ms) Jan 13 15:01:56.291: INFO: (9) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 10.06073ms) Jan 13 15:01:56.291: INFO: (9) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.926199ms) Jan 13 15:01:56.291: INFO: (9) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.930936ms) Jan 13 15:01:56.292: INFO: (9) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 10.329071ms) Jan 13 15:01:56.292: INFO: (9) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 11.473676ms) Jan 13 15:01:56.293: INFO: (9) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 11.967064ms) Jan 13 15:01:56.293: INFO: (9) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 12.070758ms) Jan 13 15:01:56.294: INFO: (9) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... 
(200; 12.441723ms) Jan 13 15:01:56.294: INFO: (9) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 12.303907ms) Jan 13 15:01:56.295: INFO: (9) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 14.510351ms) Jan 13 15:01:56.295: INFO: (9) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 14.277777ms) Jan 13 15:01:56.295: INFO: (9) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 14.252687ms) Jan 13 15:01:56.296: INFO: (9) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 14.503337ms) Jan 13 15:01:56.296: INFO: (9) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 14.460928ms) Jan 13 15:01:56.296: INFO: (9) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 14.903243ms) Jan 13 15:01:56.296: INFO: (9) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 15.42609ms) Jan 13 15:01:56.303: INFO: (10) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 6.637795ms) Jan 13 15:01:56.303: INFO: (10) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 6.944455ms) Jan 13 15:01:56.303: INFO: (10) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 7.107575ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 7.595783ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.533561ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 7.734025ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.564425ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 7.624068ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 7.55236ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 7.5419ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.569313ms) Jan 13 15:01:56.304: INFO: (10) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 7.720206ms) Jan 13 15:01:56.305: INFO: (10) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 9.170538ms) Jan 13 15:01:56.306: INFO: (10) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 9.072753ms) Jan 13 15:01:56.306: INFO: (10) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 9.408796ms) Jan 13 15:01:56.306: INFO: (10) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 9.471372ms) Jan 13 15:01:56.311: INFO: (11) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 5.352837ms) Jan 13 15:01:56.320: INFO: (11) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 14.348091ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 14.45532ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 14.417071ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 15.166744ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 15.226314ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 15.15563ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 15.22888ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 15.191604ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 15.245725ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 15.201128ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 15.103507ms) Jan 13 15:01:56.321: INFO: (11) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 15.178688ms) Jan 13 15:01:56.323: INFO: (11) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 17.007091ms) Jan 13 15:01:56.323: INFO: (11) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 17.163882ms) Jan 13 15:01:56.323: INFO: (11) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 17.15775ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 7.191653ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.246532ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.528673ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.680552ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 7.766616ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.885791ms) Jan 13 15:01:56.331: INFO: (12) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 8.263814ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.067253ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 8.105439ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 8.625346ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 8.5003ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 8.553349ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 8.55317ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 8.519919ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 8.469627ms) Jan 13 15:01:56.332: INFO: (12) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 8.935961ms) Jan 13 15:01:56.336: INFO: (13) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 3.838362ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.086203ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 7.303694ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.29542ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 7.388049ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.407284ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 7.451596ms) Jan 13 15:01:56.340: INFO: (13) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 7.530677ms) Jan 13 15:01:56.341: INFO: (13) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 7.831146ms) Jan 13 15:01:56.341: INFO: (13) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 7.929233ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 11.728217ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.662865ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.842656ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 11.813024ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 11.98083ms) Jan 13 15:01:56.345: INFO: (13) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 11.96011ms) Jan 13 15:01:56.350: INFO: (14) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 4.932245ms) Jan 13 15:01:56.352: INFO: (14) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 6.190976ms) Jan 13 15:01:56.352: INFO: (14) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 7.386003ms) Jan 13 15:01:56.353: INFO: (14) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 8.307086ms) Jan 13 15:01:56.353: INFO: (14) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 7.896345ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 10.00174ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.271043ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 10.27704ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 10.123141ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 10.009998ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 9.956521ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 10.007772ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 10.09925ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.569437ms) Jan 13 15:01:56.355: INFO: (14) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.165826ms) Jan 13 15:01:56.356: INFO: (14) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 11.134425ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.85315ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.818673ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 8.962567ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 8.626269ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.80812ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 8.859862ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... 
(200; 8.799961ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 8.920432ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 8.731308ms) Jan 13 15:01:56.365: INFO: (15) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 8.730444ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.882723ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 10.899985ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 10.908349ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 10.96868ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 10.787646ms) Jan 13 15:01:56.367: INFO: (15) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.323267ms) Jan 13 15:01:56.376: INFO: (16) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 8.540869ms) Jan 13 15:01:56.378: INFO: (16) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 10.05493ms) Jan 13 15:01:56.378: INFO: (16) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 10.371571ms) Jan 13 15:01:56.378: INFO: (16) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.326584ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 11.506388ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 11.703419ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 11.629879ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 11.564449ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 11.829348ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 11.810172ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 11.726046ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 11.714576ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.76942ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 11.808481ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 11.864063ms) Jan 13 15:01:56.379: INFO: (16) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 11.844255ms) Jan 13 15:01:56.384: INFO: (17) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 4.373864ms) Jan 13 15:01:56.384: INFO: (17) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 4.440198ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 9.360815ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 9.446859ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 9.414151ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.611333ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 9.613005ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.548599ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.719019ms) Jan 13 15:01:56.389: INFO: (17) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... 
(200; 9.698657ms) Jan 13 15:01:56.391: INFO: (17) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 11.652857ms) Jan 13 15:01:56.391: INFO: (17) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 11.663852ms) Jan 13 15:01:56.391: INFO: (17) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 11.59302ms) Jan 13 15:01:56.391: INFO: (17) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 11.578569ms) Jan 13 15:01:56.393: INFO: (17) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 13.84629ms) Jan 13 15:01:56.394: INFO: (17) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 13.959678ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 9.386323ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 9.123443ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 9.20386ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 8.891507ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 8.785538ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 9.018232ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... (200; 9.087627ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... 
(200; 8.948443ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 9.311308ms) Jan 13 15:01:56.403: INFO: (18) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 9.107528ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.805947ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 10.067154ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 9.738853ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 9.989894ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.615768ms) Jan 13 15:01:56.404: INFO: (18) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 9.664142ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.103702ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz/proxy/rewriteme">test</a> (200; 9.393278ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">test<... (200; 9.981301ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:1080/proxy/rewriteme">... (200; 9.775842ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:162/proxy/: bar (200; 9.984142ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:462/proxy/: tls qux (200; 10.476894ms) Jan 13 15:01:56.414: INFO: (19) /api/v1/namespaces/proxy-6585/pods/proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.30806ms) Jan 13 15:01:56.415: INFO: (19) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/: <a href="/api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:443/proxy/tlsrewritem... 
(200; 10.576038ms) Jan 13 15:01:56.415: INFO: (19) /api/v1/namespaces/proxy-6585/pods/http:proxy-service-8d2w6-8n7zz:160/proxy/: foo (200; 10.683115ms) Jan 13 15:01:56.415: INFO: (19) /api/v1/namespaces/proxy-6585/pods/https:proxy-service-8d2w6-8n7zz:460/proxy/: tls baz (200; 10.547181ms) Jan 13 15:01:56.417: INFO: (19) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname1/proxy/: foo (200; 12.509968ms) Jan 13 15:01:56.417: INFO: (19) /api/v1/namespaces/proxy-6585/services/proxy-service-8d2w6:portname2/proxy/: bar (200; 12.993312ms) Jan 13 15:01:56.418: INFO: (19) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname1/proxy/: foo (200; 13.265851ms) Jan 13 15:01:56.418: INFO: (19) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname2/proxy/: tls qux (200; 13.312286ms) Jan 13 15:01:56.419: INFO: (19) /api/v1/namespaces/proxy-6585/services/http:proxy-service-8d2w6:portname2/proxy/: bar (200; 14.754974ms) Jan 13 15:01:56.419: INFO: (19) /api/v1/namespaces/proxy-6585/services/https:proxy-service-8d2w6:tlsportname1/proxy/: tls baz (200; 14.883918ms) �[1mSTEP�[0m: deleting ReplicationController proxy-service-8d2w6 in namespace proxy-6585, will wait for the garbage collector to delete the pods Jan 13 15:01:56.480: INFO: Deleting ReplicationController proxy-service-8d2w6 took: 7.409219ms Jan 13 15:01:56.581: INFO: Terminating ReplicationController proxy-service-8d2w6 pods took: 100.360819ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:02.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "proxy-6585" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":11,"skipped":286,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:02.248: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating Agnhost RC Jan 13 15:02:02.272: INFO: namespace kubectl-4902 Jan 13 
15:02:02.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4902 create -f -' Jan 13 15:02:02.534: INFO: stderr: "" Jan 13 15:02:02.534: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 13 15:02:03.537: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 15:02:03.538: INFO: Found 1 / 1 Jan 13 15:02:03.538: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 13 15:02:03.541: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 15:02:03.541: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 13 15:02:03.541: INFO: wait on agnhost-primary startup in kubectl-4902 Jan 13 15:02:03.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4902 logs agnhost-primary-s7ts7 agnhost-primary' Jan 13 15:02:03.655: INFO: stderr: "" Jan 13 15:02:03.655: INFO: stdout: "Paused\n" STEP: exposing RC Jan 13 15:02:03.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4902 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Jan 13 15:02:03.798: INFO: stderr: "" Jan 13 15:02:03.798: INFO: stdout: "service/rm2 exposed\n" Jan 13 15:02:03.802: INFO: Service rm2 in namespace kubectl-4902 found. STEP: exposing service Jan 13 15:02:05.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4902 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Jan 13 15:02:05.911: INFO: stderr: "" Jan 13 15:02:05.911: INFO: stdout: "service/rm3 exposed\n" Jan 13 15:02:05.914: INFO: Service rm3 in namespace kubectl-4902 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:07.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4902" for this suite.
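For reference, the `kubectl expose` steps above amount to creating a Service whose selector matches the RC's pods (map[app:agnhost] per the selector lines in the log) and which maps port 1234 to target port 6379. Below is a minimal client-go sketch of that call; it assumes a recent client-go release, and the small `main` wrapper and error handling are illustrative rather than the test's own code.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig the e2e run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Roughly what `kubectl expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379` creates.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "rm2", Namespace: "kubectl-4902"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "agnhost"},
			Ports: []corev1.ServicePort{{
				Port:       1234,
				TargetPort: intstr.FromInt(6379),
			}},
		},
	}
	if _, err := cs.CoreV1().Services("kubectl-4902").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, is the same operation with a different Service name and port.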
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":12,"skipped":328,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:07.929: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:02:09.965: INFO: Deleting pod "var-expansion-1c5264cc-bf7f-4805-aaa8-501215ad3f78" in namespace "var-expansion-9375" Jan 13 15:02:09.970: INFO: Wait up to 5m0s for pod "var-expansion-1c5264cc-bf7f-4805-aaa8-501215ad3f78" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:11.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-9375" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][Slow] [Conformance]","total":-1,"completed":13,"skipped":328,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:01:59.267: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1392 �[1mSTEP�[0m: creating an pod Jan 13 15:01:59.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.21 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 13 15:01:59.404: INFO: stderr: "" Jan 13 15:01:59.404: INFO: stdout: 
"pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Waiting for log generator to start. Jan 13 15:01:59.404: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 13 15:01:59.404: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8317" to be "running and ready, or succeeded" Jan 13 15:01:59.407: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.836274ms Jan 13 15:02:01.411: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.006815558s Jan 13 15:02:01.411: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 13 15:02:01.411: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] �[1mSTEP�[0m: checking for a matching strings Jan 13 15:02:01.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator' Jan 13 15:02:01.519: INFO: stderr: "" Jan 13 15:02:01.519: INFO: stdout: "I0113 15:02:00.061325 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/pvc8 595\nI0113 15:02:00.261422 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/6s22 507\nI0113 15:02:00.461568 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/xbd6 586\nI0113 15:02:00.661496 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/bxc 497\nI0113 15:02:00.861410 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/gjs 432\nI0113 15:02:01.061485 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/cwgg 431\nI0113 15:02:01.261535 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/jpqk 274\nI0113 15:02:01.461562 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/8qv5 268\n" �[1mSTEP�[0m: limiting log lines Jan 13 15:02:01.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator --tail=1' Jan 13 15:02:01.618: INFO: stderr: "" Jan 13 15:02:01.618: INFO: stdout: "I0113 15:02:01.461562 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/8qv5 268\n" Jan 13 15:02:01.618: INFO: got output "I0113 15:02:01.461562 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/8qv5 268\n" �[1mSTEP�[0m: limiting log bytes Jan 13 15:02:01.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator --limit-bytes=1' Jan 13 15:02:01.726: INFO: stderr: "" Jan 13 15:02:01.726: INFO: stdout: "I" Jan 13 15:02:01.726: INFO: got output "I" �[1mSTEP�[0m: exposing timestamps Jan 13 15:02:01.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator --tail=1 --timestamps' Jan 13 15:02:01.827: INFO: stderr: "" Jan 13 15:02:01.827: INFO: stdout: "2023-01-13T15:02:01.661769764Z I0113 15:02:01.661498 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/g64s 473\n" Jan 13 15:02:01.827: INFO: got output "2023-01-13T15:02:01.661769764Z I0113 15:02:01.661498 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/g64s 473\n" �[1mSTEP�[0m: restricting to a time range Jan 13 15:02:04.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator --since=1s' Jan 13 
15:02:04.440: INFO: stderr: "" Jan 13 15:02:04.440: INFO: stdout: "I0113 15:02:03.461532 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/8bx4 365\nI0113 15:02:03.661527 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/vsc 310\nI0113 15:02:03.861573 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/sttn 268\nI0113 15:02:04.061575 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/2s7m 304\nI0113 15:02:04.261562 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/bqvg 263\n" Jan 13 15:02:04.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 logs logs-generator logs-generator --since=24h' Jan 13 15:02:04.548: INFO: stderr: "" Jan 13 15:02:04.548: INFO: stdout: "I0113 15:02:00.061325 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/pvc8 595\nI0113 15:02:00.261422 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/6s22 507\nI0113 15:02:00.461568 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/xbd6 586\nI0113 15:02:00.661496 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/bxc 497\nI0113 15:02:00.861410 1 logs_generator.go:76] 4 POST /api/v1/namespaces/default/pods/gjs 432\nI0113 15:02:01.061485 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/cwgg 431\nI0113 15:02:01.261535 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/jpqk 274\nI0113 15:02:01.461562 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/8qv5 268\nI0113 15:02:01.661498 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/g64s 473\nI0113 15:02:01.861590 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/7mf 520\nI0113 15:02:02.061499 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/6fm 545\nI0113 15:02:02.261515 1 logs_generator.go:76] 11 GET /api/v1/namespaces/ns/pods/7dz 560\nI0113 15:02:02.461533 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/96lv 438\nI0113 15:02:02.661583 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/xvq8 544\nI0113 15:02:02.861515 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/hvtl 503\nI0113 15:02:03.061454 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/c2qw 317\nI0113 15:02:03.261389 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/9hk 262\nI0113 15:02:03.461532 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/8bx4 365\nI0113 15:02:03.661527 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/vsc 310\nI0113 15:02:03.861573 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/sttn 268\nI0113 15:02:04.061575 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/2s7m 304\nI0113 15:02:04.261562 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/bqvg 263\nI0113 15:02:04.461521 1 logs_generator.go:76] 22 GET /api/v1/namespaces/default/pods/q724 245\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397 Jan 13 15:02:04.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8317 delete pod logs-generator' Jan 13 15:02:12.567: INFO: stderr: "" Jan 13 15:02:12.568: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:12.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "kubectl-8317" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":14,"skipped":198,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:02:12.006: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:02:12.034: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:14.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7864" for this suite.
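The log-filtering flags exercised by the "retrieve and filter logs" spec above (--tail, --limit-bytes, --timestamps, --since) correspond one-to-one to fields on corev1.PodLogOptions when logs are pulled through client-go. A minimal sketch follows, assuming a recent client-go; the helper name and the surrounding package are illustrative only.

```go
package e2esketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// dumpFilteredLogs shows how the kubectl flags from the log map onto PodLogOptions.
func dumpFilteredLogs(cs kubernetes.Interface) error {
	tail, limit, since := int64(1), int64(1), int64(1)
	opts := &corev1.PodLogOptions{
		Container:    "logs-generator",
		TailLines:    &tail,  // kubectl logs --tail=1
		LimitBytes:   &limit, // kubectl logs --limit-bytes=1
		Timestamps:   true,   // kubectl logs --timestamps
		SinceSeconds: &since, // kubectl logs --since=1s
	}
	raw, err := cs.CoreV1().Pods("kubectl-8317").
		GetLogs("logs-generator", opts).
		Do(context.TODO()).
		Raw()
	if err != nil {
		return err
	}
	fmt.Printf("%s", raw)
	return nil
}
```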
• ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":343,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:02:12.641: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name projected-secret-test-378bfe6c-450d-440f-bf1b-a5bfa32208e5 STEP: Creating a pod to test consume secrets Jan 13 15:02:12.685: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222" in namespace "projected-8985" to be "Succeeded or Failed" Jan 13 15:02:12.689: INFO: Pod "pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222": Phase="Pending", Reason="", readiness=false. Elapsed: 3.601621ms Jan 13 15:02:14.692: INFO: Pod "pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007493933s STEP: Saw pod success Jan 13 15:02:14.692: INFO: Pod "pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222" satisfied condition "Succeeded or Failed" Jan 13 15:02:14.696: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222 container projected-secret-volume-test: <nil> STEP: delete the pod Jan 13 15:02:14.715: INFO: Waiting for pod pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222 to disappear Jan 13 15:02:14.718: INFO: Pod pod-projected-secrets-cd5babd9-00e5-407b-8c31-8ed0cdf4f222 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:14.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8985" for this suite.
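The projected-secret spec above consumes the secret through a projected volume rather than a plain secret volume. Below is a sketch of roughly what that volume looks like in corev1 terms; the secret name is taken from the log, while the volume name and mount path are assumptions.

```go
package e2esketch

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume sketches a projected volume whose only source is the test secret.
func projectedSecretVolume() (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "projected-secret-test-378bfe6c-450d-440f-bf1b-a5bfa32208e5",
						},
					},
				}},
			},
		},
	}
	mount := corev1.VolumeMount{
		Name:      "projected-secret-volume",
		MountPath: "/projected-secret", // assumption; the e2e test uses its own path
		ReadOnly:  true,
	}
	return vol, mount
}
```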
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":234,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:14.744: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a watch on configmaps �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: closing the watch once it receives two notifications Jan 13 15:02:14.791: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7395 79fb25fe-51d5-4b45-a266-0abbbb88e6f8 11285 0 2023-01-13 15:02:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-13 15:02:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:02:14.791: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7395 79fb25fe-51d5-4b45-a266-0abbbb88e6f8 11286 0 2023-01-13 15:02:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-13 15:02:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying the configmap a second time, while the watch is closed �[1mSTEP�[0m: creating a new watch on configmaps from the last resource version observed by the first watch �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: Expecting to observe notifications for all changes to the configmap since the first watch closed Jan 13 15:02:14.814: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7395 79fb25fe-51d5-4b45-a266-0abbbb88e6f8 11288 0 2023-01-13 15:02:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-13 15:02:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 13 15:02:14.814: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-7395 79fb25fe-51d5-4b45-a266-0abbbb88e6f8 11290 0 2023-01-13 15:02:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-13 15:02:14 +0000 UTC 
FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:14.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7395" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":16,"skipped":242,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:02:14.115: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
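For orientation before the full pod dump below: the interesting parts of that spec are dnsPolicy "None", which tells the kubelet to ignore cluster DNS entirely, and dnsConfig, which supplies the resolver settings directly. A minimal corev1 sketch of those fields follows, with values taken from the dump; everything else about the pod is trimmed and the helper name is illustrative.

```go
package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsTestPod reconstructs the DNS-relevant parts of the pod dumped below.
func dnsTestPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers", Namespace: "dns-6448"},
		Spec: corev1.PodSpec{
			DNSPolicy: corev1.DNSNone, // ignore cluster DNS, use only DNSConfig below
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21",
				Args:  []string{"pause"},
			}},
		},
	}
}
```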
Jan 13 15:02:14.155: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-6448 327dd968-f6c0-4b8e-b9ba-2eae0a99c572 11269 0 2023-01-13 15:02:14 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2023-01-13 15:02:14 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dccv2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dccv2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dccv2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContain
er{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 15:02:14.162: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Jan 13 15:02:16.166: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) �[1mSTEP�[0m: Verifying customized DNS suffix list is configured on pod... Jan 13 15:02:16.167: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6448 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:02:16.167: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Verifying customized DNS server is configured on pod... Jan 13 15:02:16.248: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6448 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:02:16.249: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:02:16.357: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:16.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-6448" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":15,"skipped":368,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:16.407: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0666 on tmpfs Jan 13 15:02:16.460: INFO: Waiting up to 5m0s for pod "pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9" in namespace "emptydir-5360" to be "Succeeded or Failed" Jan 13 15:02:16.463: INFO: Pod "pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.436939ms Jan 13 15:02:18.471: INFO: Pod "pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.010611534s �[1mSTEP�[0m: Saw pod success Jan 13 15:02:18.471: INFO: Pod "pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9" satisfied condition "Succeeded or Failed" Jan 13 15:02:18.474: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:02:18.493: INFO: Waiting for pod pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9 to disappear Jan 13 15:02:18.496: INFO: Pod pod-dc9aa4de-f831-421b-93b1-c05f07a94ec9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:18.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5360" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":385,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:14.864: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating the pod Jan 13 15:02:17.423: INFO: Successfully updated pod "annotationupdate832ab08b-a435-450b-a303-ba5dad972bfb" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:19.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4663" for this suite. 
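For reference, the customized-DNS pod exercised by the [sig-network] DNS spec above boils down to a plain pod with dnsPolicy "None" and an explicit dnsConfig. The sketch below is reconstructed from the Pod object dumped in the log (namespace, pod name, container name, image, nameserver and search suffix all come from that dump); it is not copied from the e2e source.

kubectl -n dns-6448 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-dns-nameservers
spec:
  dnsPolicy: "None"                    # ignore the cluster resolver entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]           # custom DNS server under test
    searches: ["resolv.conf.local"]    # custom search suffix under test
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.21
    args: ["pause"]
EOF
# The suite then verifies the rendered resolver config from inside the pod:
kubectl -n dns-6448 exec test-dns-nameservers -c agnhost-container -- /agnhost dns-suffix
kubectl -n dns-6448 exec test-dns-nameservers -c agnhost-container -- /agnhost dns-server-list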
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":271,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:18.523: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service endpoint-test2 in namespace services-1021 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-1021 to expose endpoints map[] Jan 13 15:02:18.573: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Jan 13 15:02:19.583: INFO: successfully validated that service endpoint-test2 in namespace services-1021 exposes endpoints map[] �[1mSTEP�[0m: Creating pod pod1 in namespace services-1021 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-1021 to expose endpoints map[pod1:[80]] Jan 13 15:02:21.603: INFO: successfully validated that service endpoint-test2 in namespace services-1021 exposes endpoints map[pod1:[80]] �[1mSTEP�[0m: Creating pod pod2 in namespace services-1021 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-1021 to expose endpoints map[pod1:[80] pod2:[80]] Jan 13 15:02:22.625: INFO: successfully validated that service endpoint-test2 in namespace services-1021 exposes endpoints map[pod1:[80] pod2:[80]] �[1mSTEP�[0m: Deleting pod pod1 in namespace services-1021 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-1021 to expose endpoints map[pod2:[80]] Jan 13 15:02:22.664: INFO: successfully validated that service endpoint-test2 in namespace services-1021 exposes endpoints map[pod2:[80]] �[1mSTEP�[0m: Deleting pod pod2 in namespace services-1021 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-1021 to expose endpoints map[] Jan 13 15:02:22.690: INFO: successfully validated that service endpoint-test2 in namespace services-1021 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:22.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-1021" for this suite. 
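A quick way to watch what that endpoint test asserts, sketched with plain kubectl (namespace and service name are taken from the log; this is not the suite's own tooling):

# Watch the Endpoints object fill and drain as pod1/pod2 are created and deleted
kubectl -n services-1021 get endpoints endpoint-test2 -o wide -w

# Or print just the ready addresses and their ports
kubectl -n services-1021 get endpoints endpoint-test2 \
  -o jsonpath='{range .subsets[*]}{.addresses[*].ip}{" -> "}{.ports[*].port}{"\n"}{end}'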
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":17,"skipped":398,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:19.457: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating replication controller my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e Jan 13 15:02:19.493: INFO: Pod name my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e: Found 0 pods out of 1 Jan 13 15:02:24.496: INFO: Pod name my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e: Found 1 pods out of 1 Jan 13 15:02:24.496: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e" are running Jan 13 15:02:24.498: INFO: Pod "my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e-vh9xt" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 15:02:19 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 15:02:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 15:02:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-13 15:02:19 +0000 UTC Reason: Message:}]) Jan 13 15:02:24.499: INFO: Trying to dial the pod Jan 13 15:02:29.509: INFO: Controller my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e: Got expected result from replica 1 [my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e-vh9xt]: "my-hostname-basic-02005916-c989-478f-8c82-315182ec6c4e-vh9xt", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:29.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-7929" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":18,"skipped":279,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:29.524: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating secret with name secret-test-2ad2581b-1c1e-4076-907b-199353d7d969 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 13 15:02:29.563: INFO: Waiting up to 5m0s for pod "pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44" in namespace "secrets-8020" to be "Succeeded or Failed" Jan 13 15:02:29.566: INFO: Pod "pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657953ms Jan 13 15:02:31.569: INFO: Pod "pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005899593s �[1mSTEP�[0m: Saw pod success Jan 13 15:02:31.570: INFO: Pod "pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44" satisfied condition "Succeeded or Failed" Jan 13 15:02:31.572: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:02:31.588: INFO: Waiting for pod pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44 to disappear Jan 13 15:02:31.591: INFO: Pod pod-secrets-57fe68e2-e297-4d3f-99ca-9246a3c38c44 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:31.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-8020" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":282,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:22.826: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a service nodeport-service with the type=NodePort in namespace services-3705 �[1mSTEP�[0m: Creating active service to test reachability when its FQDN is referred as externalName for another service �[1mSTEP�[0m: creating service externalsvc in namespace services-3705 �[1mSTEP�[0m: creating replication controller externalsvc in namespace services-3705 I0113 15:02:22.900302 18 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3705, replica count: 2 I0113 15:02:25.950824 18 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: changing the NodePort service to type=ExternalName Jan 13 15:02:25.974: INFO: Creating new exec pod Jan 13 15:02:27.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3705 exec execpod82x8m -- /bin/sh -x -c nslookup nodeport-service.services-3705.svc.cluster.local' Jan 13 15:02:28.175: INFO: stderr: "+ nslookup nodeport-service.services-3705.svc.cluster.local\n" Jan 13 15:02:28.175: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-3705.svc.cluster.local\tcanonical name = externalsvc.services-3705.svc.cluster.local.\nName:\texternalsvc.services-3705.svc.cluster.local\nAddress: 10.133.229.189\n\n" �[1mSTEP�[0m: deleting ReplicationController externalsvc in namespace services-3705, will wait for the garbage collector to delete the pods Jan 13 15:02:28.233: INFO: Deleting ReplicationController externalsvc took: 4.987145ms Jan 13 15:02:28.334: INFO: Terminating ReplicationController externalsvc pods took: 100.2596ms Jan 13 15:02:42.755: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:42.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3705" for this suite. 
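The NodePort-to-ExternalName switch above amounts to rewriting the service spec and then resolving the service name from a client pod, roughly as below. The exact patch the suite issues is not shown in the log, and on some Kubernetes versions the clusterIP/nodePort fields have to be cleared in the same update.

# Repoint the existing service at another service's in-cluster FQDN (illustrative patch)
kubectl -n services-3705 patch service nodeport-service --type merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-3705.svc.cluster.local"}}'

# Same check the suite runs from its exec pod: the name should resolve as a CNAME to externalsvc
kubectl -n services-3705 exec execpod82x8m -- \
  nslookup nodeport-service.services-3705.svc.cluster.local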
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":18,"skipped":443,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:42.801: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on node default medium Jan 13 15:02:42.850: INFO: Waiting up to 5m0s for pod "pod-b7afa11d-d47e-49e5-aad8-d891adca9519" in namespace "emptydir-5888" to be "Succeeded or Failed" Jan 13 15:02:42.855: INFO: Pod "pod-b7afa11d-d47e-49e5-aad8-d891adca9519": Phase="Pending", Reason="", readiness=false. Elapsed: 5.221883ms Jan 13 15:02:44.859: INFO: Pod "pod-b7afa11d-d47e-49e5-aad8-d891adca9519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009144112s �[1mSTEP�[0m: Saw pod success Jan 13 15:02:44.859: INFO: Pod "pod-b7afa11d-d47e-49e5-aad8-d891adca9519" satisfied condition "Succeeded or Failed" Jan 13 15:02:44.862: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod pod-b7afa11d-d47e-49e5-aad8-d891adca9519 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:02:44.878: INFO: Waiting for pod pod-b7afa11d-d47e-49e5-aad8-d891adca9519 to disappear Jan 13 15:02:44.880: INFO: Pod pod-b7afa11d-d47e-49e5-aad8-d891adca9519 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:02:44.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5888" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":445,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:02:44.915: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:02:44.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2193" for this suite.
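The ConfigMap lifecycle steps logged above map almost one-to-one onto plain kubectl; the resource name and the label used below are made up for illustration.

kubectl -n configmap-2193 create configmap demo-cm --from-literal=data=value
kubectl -n configmap-2193 label configmap demo-cm test-configmap=lifecycle

# fetch, then patch
kubectl -n configmap-2193 get configmap demo-cm -o yaml
kubectl -n configmap-2193 patch configmap demo-cm --type merge -p '{"data":{"data":"patched"}}'

# list across all namespaces with a label selector, delete by collection, list what is left
kubectl get configmaps --all-namespaces -l test-configmap=lifecycle
kubectl -n configmap-2193 delete configmaps -l test-configmap=lifecycle
kubectl -n configmap-2193 get configmaps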
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":20,"skipped":462,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:31.618: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:103 �[1mSTEP�[0m: Creating service test in namespace statefulset-6083 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Initializing watcher for selector baz=blah,foo=bar �[1mSTEP�[0m: Creating stateful set ss in namespace statefulset-6083 �[1mSTEP�[0m: Waiting until all stateful set ss replicas will be running in namespace statefulset-6083 Jan 13 15:02:31.662: INFO: Found 0 stateful pods, waiting for 1 Jan 13 15:02:41.665: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 13 15:02:41.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 15:02:41.856: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 15:02:41.857: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 15:02:41.857: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 15:02:41.860: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 13 15:02:51.864: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 15:02:51.864: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 15:02:51.882: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999508s Jan 13 15:02:52.886: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993287901s Jan 13 15:02:53.889: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989440098s Jan 13 15:02:54.893: INFO: Verifying statefulset 
ss doesn't scale past 1 for another 6.986137558s Jan 13 15:02:55.898: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981829922s Jan 13 15:02:56.902: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977365611s Jan 13 15:02:57.906: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.973121023s Jan 13 15:02:58.911: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.969133662s Jan 13 15:02:59.915: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.964309429s Jan 13 15:03:00.919: INFO: Verifying statefulset ss doesn't scale past 1 for another 960.289186ms �[1mSTEP�[0m: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6083 Jan 13 15:03:01.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 15:03:02.109: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 15:03:02.109: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 15:03:02.109: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 15:03:02.112: INFO: Found 1 stateful pods, waiting for 3 Jan 13 15:03:12.116: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 13 15:03:12.116: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 13 15:03:12.117: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Verifying that stateful set ss was scaled up in order �[1mSTEP�[0m: Scale down will halt with unhealthy stateful pod Jan 13 15:03:12.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 15:03:12.294: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 15:03:12.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 15:03:12.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 15:03:12.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 15:03:12.489: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 15:03:12.489: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 15:03:12.489: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 15:03:12.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 13 15:03:12.689: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 13 15:03:12.689: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 13 15:03:12.689: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 13 15:03:12.689: INFO: Waiting for 
statefulset status.replicas updated to 0 Jan 13 15:03:12.693: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 13 15:03:22.700: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 13 15:03:22.700: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 13 15:03:22.700: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 13 15:03:22.710: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999496s Jan 13 15:03:23.714: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996857776s Jan 13 15:03:24.719: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992486428s Jan 13 15:03:25.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987610653s Jan 13 15:03:26.727: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.983557138s Jan 13 15:03:27.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.979539877s Jan 13 15:03:28.735: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.975476398s Jan 13 15:03:29.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.97129959s Jan 13 15:03:30.744: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.966951781s Jan 13 15:03:31.748: INFO: Verifying statefulset ss doesn't scale past 3 for another 962.703214ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-6083 Jan 13 15:03:32.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 15:03:32.929: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 15:03:32.929: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 15:03:32.929: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 15:03:32.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 15:03:33.092: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 15:03:33.092: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 15:03:33.092: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 15:03:33.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-6083 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 13 15:03:33.270: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 13 15:03:33.270: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 13 15:03:33.270: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 13 15:03:33.270: INFO: Scaling statefulset ss to 0 �[1mSTEP�[0m: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:114 Jan 13 15:03:53.285: INFO: Deleting all statefulset in ns statefulset-6083 Jan 13 15:03:53.289: INFO: Scaling statefulset ss to 0 Jan 13 15:03:53.307: INFO: Waiting for statefulset status.replicas updated to 0 Jan 13 15:03:53.312: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:03:53.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-6083" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":20,"skipped":296,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:03:53.354: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 15:03:53.412: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e" in namespace "projected-8890" to be "Succeeded or Failed" Jan 13 15:03:53.419: INFO: Pod "downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.643883ms Jan 13 15:03:55.426: INFO: Pod "downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.013875309s �[1mSTEP�[0m: Saw pod success Jan 13 15:03:55.426: INFO: Pod "downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e" satisfied condition "Succeeded or Failed" Jan 13 15:03:55.431: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:03:55.454: INFO: Waiting for pod downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e to disappear Jan 13 15:03:55.459: INFO: Pod downwardapi-volume-9c933f15-c92b-4a2d-b9e4-aae770f35d3e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:03:55.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-8890" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":297,"failed":3,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:00:06.118: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating pod test-webserver-d63a59c9-8b04-4b03-aeef-d6ff395ecfdd in namespace container-probe-3782 Jan 13 15:00:10.160: INFO: Started pod test-webserver-d63a59c9-8b04-4b03-aeef-d6ff395ecfdd in namespace container-probe-3782 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 13 15:00:10.164: INFO: Initial restart count of pod test-webserver-d63a59c9-8b04-4b03-aeef-d6ff395ecfdd is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:04:10.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-3782" for this suite. 
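The pod behind the long-running "should *not* be restarted" probe case above is essentially a webserver whose HTTP liveness probe keeps succeeding, so restartCount has to stay at 0 for the whole observation window. A minimal sketch of that setup; the image and probe path are placeholders, not the e2e pod verbatim.

kubectl -n container-probe-3782 apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-webserver
spec:
  containers:
  - name: test-webserver
    image: nginx:1.21            # placeholder; the suite uses its own test-webserver image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                  # any path that keeps returning 200
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 3
EOF

# What the test keeps checking for roughly four minutes:
kubectl -n container-probe-3782 get pod test-webserver \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'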
�[32m• [SLOW TEST:244.638 seconds]�[0m [k8s.io] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624�[0m should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":882,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 14:59:52.182: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-5559 �[1mSTEP�[0m: creating service affinity-clusterip-transition in namespace services-5559 �[1mSTEP�[0m: creating replication controller affinity-clusterip-transition in namespace services-5559 I0113 14:59:52.240043 17 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-5559, replica count: 3 I0113 14:59:55.290576 17 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 14:59:55.295: INFO: Creating new exec pod Jan 13 14:59:58.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 13 14:59:58.515: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jan 13 14:59:58.515: INFO: stdout: "" Jan 13 14:59:58.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c nc -zv -t -w 2 10.136.76.29 80' Jan 13 14:59:58.712: INFO: stderr: "+ nc -zv -t -w 2 10.136.76.29 80\nConnection to 10.136.76.29 80 port [tcp/http] succeeded!\n" Jan 13 14:59:58.712: INFO: stdout: "" Jan 13 14:59:58.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; 
done' Jan 13 15:00:13.044: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n" Jan 13 15:00:13.044: INFO: stdout: "\n\naffinity-clusterip-transition-r5dpg\naffinity-clusterip-transition-r5dpg\n\n\naffinity-clusterip-transition-r5dpg\naffinity-clusterip-transition-r5dpg\naffinity-clusterip-transition-r5dpg\naffinity-clusterip-transition-r5dpg\n\n\n\naffinity-clusterip-transition-r5dpg\n\naffinity-clusterip-transition-r5dpg\naffinity-clusterip-transition-wlldd" Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:00:13.044: INFO: Received response from host: affinity-clusterip-transition-wlldd Jan 13 15:00:43.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done' Jan 13 15:01:01.347: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ 
echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.76.29:80/\n" Jan 13 15:01:01.347: INFO: stdout: "\n\naffinity-clusterip-transition-wlldd\n\naffinity-clusterip-transition-wlldd\n\naffinity-clusterip-transition-r5dpg\n\n\naffinity-clusterip-transition-wlldd\naffinity-clusterip-transition-r5dpg\n\naffinity-clusterip-transition-wlldd\n\n\n\naffinity-clusterip-transition-r5dpg" Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-wlldd Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-wlldd Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-wlldd Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-wlldd Jan 13 15:01:01.347: INFO: Received response from host: affinity-clusterip-transition-r5dpg Jan 13 15:01:01.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done' Jan 13 15:01:33.672: INFO: rc: 28 Jan 13 15:01:33.672: INFO: Failed to get response from 10.136.76.29:80. Retry until timeout Jan 13 15:02:03.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done' Jan 13 15:02:35.999: INFO: rc: 28 Jan 13 15:02:35.999: INFO: Failed to get response from 10.136.76.29:80. Retry until timeout Jan 13 15:03:03.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done' Jan 13 15:03:35.951: INFO: rc: 28 Jan 13 15:03:35.951: INFO: Failed to get response from 10.136.76.29:80. Retry until timeout Jan 13 15:03:35.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done' Jan 13 15:04:08.290: INFO: rc: 28 Jan 13 15:04:08.290: INFO: Failed to get response from 10.136.76.29:80. Retry until timeout Jan 13 15:04:08.290: INFO: [] Jan 13 15:04:08.291: FAIL: Connection timed out or not enough responses. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc00337e2c0, 0xc0031f9400, 0xc000c52390, 0xc, 0x50, 0x1, 0xc0031f9401) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000c8eb00, 0x56112e0, 0xc00337e2c0, 0xc00103e280, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3459 +0x88c k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399 k8s.io/kubernetes/test/e2e/network.glob..func24.27() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2437 +0xa5 k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cf4c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c k8s.io/kubernetes/test/e2e.TestE2E(0xc000cf4c00) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b testing.tRunner(0xc000cf4c00, 0x4fc9940) /usr/local/go/src/testing/testing.go:1123 +0xef created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1168 +0x2b3 Jan 13 15:04:08.292: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip-transition in namespace services-5559, will wait for the garbage collector to delete the pods Jan 13 15:04:08.394: INFO: Deleting ReplicationController affinity-clusterip-transition took: 15.88104ms Jan 13 15:04:08.495: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.488571ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:04:22.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-5559" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[91m�[1m• Failure [270.586 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23�[0m �[91m�[1mshould be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[91mJan 13 15:04:08.291: Connection timed out or not enough responses.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:04:10.888: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: delete a job �[1mSTEP�[0m: deleting Job.batch foo in namespace job-7932, will wait for the garbage collector to delete the pods Jan 13 15:04:13.047: INFO: Deleting Job.batch foo took: 11.441642ms Jan 13 15:04:13.147: INFO: Terminating Job.batch foo pods took: 100.355186ms �[1mSTEP�[0m: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:04:46.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-7932" for this suite. 
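For the one spec that actually failed in this chunk, the interesting part is above: the suite flips sessionAffinity to ClientIP on the affinity-clusterip-transition service and then curls the ClusterIP in a loop from its exec pod. From 15:01:33 onward every batch returns rc 28, curl's connect-timeout exit code, so the ClusterIP stopped answering at all rather than answering from the wrong backend. The commands below reproduce that probe by hand; the patch is illustrative, the curl loop is copied from the log.

# Turn client-IP session affinity on for the service under test (illustrative patch)
kubectl -n services-5559 patch service affinity-clusterip-transition --type merge \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'

# The probe the suite runs from its exec pod: 16 curls against the ClusterIP,
# expecting every response from a single backend once affinity is in effect
kubectl -n services-5559 exec execpod-affinity9gjxm -- /bin/sh -x -c \
  'for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.76.29:80/ ; done'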
•
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":58,"skipped":915,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:04:46.709: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 15:04:48.830: INFO: Waiting up to 5m0s for pod "client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe" in namespace "pods-2066" to be "Succeeded or Failed"
Jan 13 15:04:48.840: INFO: Pod "client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.626231ms
Jan 13 15:04:50.847: INFO: Pod "client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016720182s
Jan 13 15:04:52.853: INFO: Pod "client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023082312s
STEP: Saw pod success
Jan 13 15:04:52.853: INFO: Pod "client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe" satisfied condition "Succeeded or Failed"
Jan 13 15:04:52.859: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe container env3cont: <nil>
STEP: delete the pod
Jan 13 15:04:52.880: INFO: Waiting for pod client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe to disappear
Jan 13 15:04:52.888: INFO: Pod client-envvars-0a2f04f7-f097-4439-811c-125aafe97dbe no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:04:52.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2066" for this suite.
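The service-environment-variables check above relies on the kubelet injecting <NAME>_SERVICE_HOST / <NAME>_SERVICE_PORT variables for services that already exist when a pod starts. A quick manual version (namespace from the log; image and pod name are placeholders):

# Start a throwaway pod after the services exist and dump the injected variables
kubectl -n pods-2066 run env-check --image=busybox:1.28 --restart=Never --rm -it -- \
  sh -c 'env | grep _SERVICE_ | sort'
# Expect KUBERNETES_SERVICE_HOST/PORT plus one pair per service in the namespace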
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":933,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:04:52.910: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating Pod �[1mSTEP�[0m: Reading file content from the nginx-container Jan 13 15:04:55.041: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8609 PodName:pod-sharedvolume-2cf5d319-4bbe-4989-9c4a-61aa2447f7df ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:04:55.041: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 13 15:04:55.181: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:04:55.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-8609" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":60,"skipped":934,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:02:45.030: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod with failed condition �[1mSTEP�[0m: updating the pod Jan 13 15:04:45.600: INFO: Successfully updated pod "var-expansion-abe2075b-aec1-4a46-a90b-cb62dd8b2c14" �[1mSTEP�[0m: waiting for pod running �[1mSTEP�[0m: deleting the pod gracefully Jan 13 15:04:47.612: INFO: Deleting pod "var-expansion-abe2075b-aec1-4a46-a90b-cb62dd8b2c14" in namespace "var-expansion-6701" Jan 13 15:04:47.622: INFO: Wait up to 5m0s for pod "var-expansion-abe2075b-aec1-4a46-a90b-cb62dd8b2c14" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:05:19.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-6701" for this suite. 
�[32m• [SLOW TEST:154.628 seconds]�[0m [k8s.io] Variable Expansion �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:624�[0m should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][Slow] [Conformance]","total":-1,"completed":21,"skipped":488,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:05:19.699: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:05:19.764: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-7a837327-39d7-479e-b34b-32c90bd7b429" in namespace "security-context-test-2752" to be "Succeeded or Failed" Jan 13 15:05:19.768: INFO: Pod "busybox-privileged-false-7a837327-39d7-479e-b34b-32c90bd7b429": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063894ms Jan 13 15:05:21.774: INFO: Pod "busybox-privileged-false-7a837327-39d7-479e-b34b-32c90bd7b429": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009694622s Jan 13 15:05:21.774: INFO: Pod "busybox-privileged-false-7a837327-39d7-479e-b34b-32c90bd7b429" satisfied condition "Succeeded or Failed" Jan 13 15:05:21.791: INFO: Got logs for pod "busybox-privileged-false-7a837327-39d7-479e-b34b-32c90bd7b429": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:05:21.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-2752" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":504,"failed":3,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:04:55.286: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 15:04:56.091: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 15:04:58.115: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219096, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219096, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219096, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219096, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 15:05:01.137: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:05:01.143: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-7809-crds.webhook.example.com via the AdmissionRegistration API Jan 13 15:05:11.684: INFO: Waiting for webhook configuration to be ready... Jan 13 15:05:21.802: INFO: Waiting for webhook configuration to be ready... Jan 13 15:05:31.904: INFO: Waiting for webhook configuration to be ready... Jan 13 15:05:42.004: INFO: Waiting for webhook configuration to be ready... 
Jan 13 15:05:52.020: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:05:52.021: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002f61f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForCustomResource(0xc0001f54a0, 0xc003fe2bd0, 0xc, 0xc0038d9e00, 0xc0034a3860, 0x20fb, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1825 +0xc6a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.13()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:325 +0xc9
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc003602300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc003602300)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc003602300, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:05:52.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3710" for this suite.
STEP: Destroying namespace "webhook-3710-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [57.378 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 13 15:05:52.021: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002f61f0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1825
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":60,"skipped":973,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:05:52.668: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery]
AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 15:05:53.875: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 15:05:55.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219153, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219153, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219153, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219153, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 15:05:58.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:05:58.931: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-3806-crds.webhook.example.com via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource while v1 is storage version �[1mSTEP�[0m: Patching Custom Resource Definition to set v2 as storage �[1mSTEP�[0m: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:06:00.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-2078" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-2078-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":61,"skipped":973,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:06:00.450: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should provide secure master service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:06:00.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-1876" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":62,"skipped":995,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:06:00.600: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename podtemplate �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should delete a collection of pod templates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Create set of pod templates Jan 13 15:06:00.653: INFO: created test-podtemplate-1 Jan 13 15:06:00.658: INFO: created test-podtemplate-2 Jan 13 15:06:00.664: INFO: created test-podtemplate-3 �[1mSTEP�[0m: get a list of pod templates with a label in the current namespace �[1mSTEP�[0m: delete collection of pod templates Jan 13 15:06:00.669: INFO: requesting DeleteCollection of pod templates �[1mSTEP�[0m: check that the list of pod templates matches the requested quantity Jan 13 15:06:00.686: INFO: requesting list of pod templates to confirm quantity [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:06:00.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "podtemplate-5945" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":63,"skipped":1032,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:06:00.726: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 13 15:06:00.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c" in namespace "downward-api-3819" to be "Succeeded or Failed" Jan 13 15:06:00.813: INFO: Pod "downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.414395ms Jan 13 15:06:02.818: INFO: Pod "downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.015346784s �[1mSTEP�[0m: Saw pod success Jan 13 15:06:02.818: INFO: Pod "downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c" satisfied condition "Succeeded or Failed" Jan 13 15:06:02.822: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr pod downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:06:02.851: INFO: Waiting for pod downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c to disappear Jan 13 15:06:02.855: INFO: Pod downwardapi-volume-a4c2babd-1f0d-4a88-bb7a-efa184ed541c no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:06:02.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-3819" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1040,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:05:21.817: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication Jan 13 15:05:22.365: INFO: role binding webhook-auth-reader already exists �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 15:05:22.395: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 15:05:25.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Listing all of the created validation webhooks Jan 13 15:05:35.535: INFO: Waiting for webhook configuration to be ready... Jan 13 15:05:45.660: INFO: Waiting for webhook configuration to be ready... Jan 13 15:05:55.766: INFO: Waiting for webhook configuration to be ready... Jan 13 15:06:05.863: INFO: Waiting for webhook configuration to be ready... Jan 13 15:06:15.889: INFO: Waiting for webhook configuration to be ready... 
Jan 13 15:06:15.889: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001fa200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:605 +0x7ec
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0022ab200, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:06:15.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8414" for this suite.
STEP: Destroying namespace "webhook-8414-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [54.186 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 13 15:06:15.889: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001fa200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:605
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:06:03.048: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:299
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a replication controller
Jan 13 15:06:03.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 create -f -'
Jan 13 15:06:04.821: INFO: stderr: ""
Jan 13 15:06:04.822: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 13 15:06:04.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:05.049: INFO: stderr: "" Jan 13 15:06:05.049: INFO: stdout: "update-demo-nautilus-vqr8x update-demo-nautilus-zk4ms " Jan 13 15:06:05.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:05.224: INFO: stderr: "" Jan 13 15:06:05.224: INFO: stdout: "" Jan 13 15:06:05.224: INFO: update-demo-nautilus-vqr8x is created but not running Jan 13 15:06:10.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:10.374: INFO: stderr: "" Jan 13 15:06:10.374: INFO: stdout: "update-demo-nautilus-vqr8x update-demo-nautilus-zk4ms " Jan 13 15:06:10.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:10.528: INFO: stderr: "" Jan 13 15:06:10.528: INFO: stdout: "true" Jan 13 15:06:10.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 15:06:10.691: INFO: stderr: "" Jan 13 15:06:10.691: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 15:06:10.691: INFO: validating pod update-demo-nautilus-vqr8x Jan 13 15:06:10.700: INFO: got data: { "image": "nautilus.jpg" } Jan 13 15:06:10.700: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 15:06:10.700: INFO: update-demo-nautilus-vqr8x is verified up and running Jan 13 15:06:10.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-zk4ms -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:10.867: INFO: stderr: "" Jan 13 15:06:10.867: INFO: stdout: "true" Jan 13 15:06:10.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-zk4ms -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 15:06:11.030: INFO: stderr: "" Jan 13 15:06:11.030: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 15:06:11.030: INFO: validating pod update-demo-nautilus-zk4ms Jan 13 15:06:11.038: INFO: got data: { "image": "nautilus.jpg" } Jan 13 15:06:11.038: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 13 15:06:11.038: INFO: update-demo-nautilus-zk4ms is verified up and running �[1mSTEP�[0m: scaling down the replication controller Jan 13 15:06:11.044: INFO: scanned /root for discovery docs: <nil> Jan 13 15:06:11.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 13 15:06:12.301: INFO: stderr: "" Jan 13 15:06:12.301: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Jan 13 15:06:12.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:12.493: INFO: stderr: "" Jan 13 15:06:12.493: INFO: stdout: "update-demo-nautilus-vqr8x update-demo-nautilus-zk4ms " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 13 15:06:17.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:17.705: INFO: stderr: "" Jan 13 15:06:17.705: INFO: stdout: "update-demo-nautilus-vqr8x update-demo-nautilus-zk4ms " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Jan 13 15:06:22.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:22.894: INFO: stderr: "" Jan 13 15:06:22.894: INFO: stdout: "update-demo-nautilus-vqr8x " Jan 13 15:06:22.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:23.085: INFO: stderr: "" Jan 13 15:06:23.085: INFO: stdout: "true" Jan 13 15:06:23.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 15:06:23.260: INFO: stderr: "" Jan 13 15:06:23.261: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 15:06:23.261: INFO: validating pod update-demo-nautilus-vqr8x Jan 13 15:06:23.267: INFO: got data: { "image": "nautilus.jpg" } Jan 13 15:06:23.267: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 15:06:23.267: INFO: update-demo-nautilus-vqr8x is verified up and running �[1mSTEP�[0m: scaling up the replication controller Jan 13 15:06:23.271: INFO: scanned /root for discovery docs: <nil> Jan 13 15:06:23.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Jan 13 15:06:24.521: INFO: stderr: "" Jan 13 15:06:24.521: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. 
Jan 13 15:06:24.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 13 15:06:24.716: INFO: stderr: "" Jan 13 15:06:24.716: INFO: stdout: "update-demo-nautilus-76hqb update-demo-nautilus-vqr8x " Jan 13 15:06:24.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-76hqb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:24.904: INFO: stderr: "" Jan 13 15:06:24.904: INFO: stdout: "true" Jan 13 15:06:24.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-76hqb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 15:06:25.109: INFO: stderr: "" Jan 13 15:06:25.109: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 15:06:25.109: INFO: validating pod update-demo-nautilus-76hqb Jan 13 15:06:25.116: INFO: got data: { "image": "nautilus.jpg" } Jan 13 15:06:25.116: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 15:06:25.116: INFO: update-demo-nautilus-76hqb is verified up and running Jan 13 15:06:25.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 13 15:06:25.315: INFO: stderr: "" Jan 13 15:06:25.315: INFO: stdout: "true" Jan 13 15:06:25.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods update-demo-nautilus-vqr8x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 13 15:06:25.478: INFO: stderr: "" Jan 13 15:06:25.478: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 13 15:06:25.478: INFO: validating pod update-demo-nautilus-vqr8x Jan 13 15:06:25.484: INFO: got data: { "image": "nautilus.jpg" } Jan 13 15:06:25.484: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 13 15:06:25.485: INFO: update-demo-nautilus-vqr8x is verified up and running �[1mSTEP�[0m: using delete to clean up resources Jan 13 15:06:25.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 delete --grace-period=0 --force -f -' Jan 13 15:06:25.677: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 13 15:06:25.677: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 13 15:06:25.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get rc,svc -l name=update-demo --no-headers' Jan 13 15:06:25.876: INFO: stderr: "No resources found in kubectl-658 namespace.\n" Jan 13 15:06:25.876: INFO: stdout: "" Jan 13 15:06:25.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 15:06:26.066: INFO: stderr: "" Jan 13 15:06:26.066: INFO: stdout: "update-demo-nautilus-76hqb\nupdate-demo-nautilus-vqr8x\n" Jan 13 15:06:26.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get rc,svc -l name=update-demo --no-headers' Jan 13 15:06:26.778: INFO: stderr: "No resources found in kubectl-658 namespace.\n" Jan 13 15:06:26.778: INFO: stdout: "" Jan 13 15:06:26.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-658 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 13 15:06:27.007: INFO: stderr: "" Jan 13 15:06:27.008: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:06:27.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-658" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":65,"skipped":1127,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":22,"skipped":508,"failed":4,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:06:16.007: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 13 15:06:17.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 13 15:06:19.553: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219177, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219177, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219177, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219177, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 13 15:06:22.594: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
Jan 13 15:06:32.710: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:06:42.839: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:06:52.945: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:07:03.037: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:07:13.064: INFO: Waiting for webhook configuration to be ready...
Jan 13 15:07:13.064: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0001fa200>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func23.18()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:605 +0x7ec
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc0022ab200)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc0022ab200, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:07:13.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3609" for this suite.
STEP: Destroying namespace "webhook-3609-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• Failure [57.178 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629

  Jan 13 15:07:13.065: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0001fa200>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:605
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":22,"skipped":508,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:07:13.189: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 13 15:07:13.673: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 13 15:07:16.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:07:17.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-7675" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-7675-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":23,"skipped":508,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:07:17.112: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-8ca70dfa-7ae6-4930-ba89-1bd560179757 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 13 15:07:17.219: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd" in namespace "projected-9674" to be "Succeeded or Failed" Jan 13 15:07:17.226: INFO: Pod "pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.557778ms Jan 13 15:07:19.232: INFO: Pod "pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013072693s �[1mSTEP�[0m: Saw pod success Jan 13 15:07:19.232: INFO: Pod "pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd" satisfied condition "Succeeded or Failed" Jan 13 15:07:19.240: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-ceauut pod pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 13 15:07:19.300: INFO: Waiting for pod pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd to disappear Jan 13 15:07:19.310: INFO: Pod pod-projected-configmaps-d8f9cd23-8b12-4884-a70b-f63997d3bdcd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:07:19.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9674" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":509,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:07:19.343: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:07:19.444: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7d5f79b8-1854-42d1-996b-1d51a6530d63", Controller:(*bool)(0xc0031b1a3a), BlockOwnerDeletion:(*bool)(0xc0031b1a3b)}} Jan 13 15:07:19.456: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"183683d2-1b83-4423-85fb-cb656653cc97", Controller:(*bool)(0xc00188be3e), BlockOwnerDeletion:(*bool)(0xc00188be3f)}} Jan 13 15:07:19.466: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"dfdf2b52-81a9-4d83-9181-dce5c61ef71b", Controller:(*bool)(0xc003054f56), BlockOwnerDeletion:(*bool)(0xc003054f57)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:07:24.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-3884" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":25,"skipped":513,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:07:24.503: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: updating the pod Jan 13 15:07:27.100: INFO: Successfully updated pod "pod-update-cb1aa100-8761-4c06-8a02-955bad556176" �[1mSTEP�[0m: verifying the updated pod is in kubernetes Jan 13 15:07:27.113: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:07:27.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-8579" for this suite. 
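The update step above is a plain get-mutate-update of the pod object, followed by re-reading it to verify the change stuck. A minimal sketch, assuming a standard client-go clientset and a placeholder label change:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel fetches the pod, mutates a label, and pushes the update,
// retrying on resource-version conflicts. Namespace, pod name, and the
// label itself are placeholders.
func updatePodLabel(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // illustrative label change
		_, err = cs.CoreV1().Pods(ns).Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
}
```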
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":514,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:07:27.234: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 13 15:07:28.078: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 15:07:30.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219248, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219248, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219248, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219248, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 13 15:07:33.123: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: Registering the 
webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod �[1mSTEP�[0m: 'kubectl attach' the pod, should be denied by the webhook Jan 13 15:07:35.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-2677 attach --namespace=webhook-2677 to-be-attached-pod -i -c=container1' Jan 13 15:07:35.381: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:07:35.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-2677" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-2677-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":27,"skipped":557,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:07:35.547: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename discovery �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39 �[1mSTEP�[0m: Setting up server cert [It] should validate PreferredVersion for each APIGroup [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:07:36.470: INFO: Checking APIGroup: apiregistration.k8s.io Jan 13 15:07:36.473: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1 Jan 13 15:07:36.473: INFO: Versions found [{apiregistration.k8s.io/v1 v1} {apiregistration.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.473: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1 Jan 13 15:07:36.473: INFO: Checking APIGroup: apps Jan 13 15:07:36.475: INFO: PreferredVersion.GroupVersion: apps/v1 Jan 13 15:07:36.475: INFO: Versions found [{apps/v1 v1}] Jan 13 15:07:36.475: INFO: apps/v1 matches apps/v1 Jan 13 15:07:36.475: INFO: Checking APIGroup: events.k8s.io Jan 13 15:07:36.476: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1 Jan 13 15:07:36.476: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.476: INFO: events.k8s.io/v1 matches events.k8s.io/v1 Jan 13 15:07:36.476: 
INFO: Checking APIGroup: authentication.k8s.io Jan 13 15:07:36.478: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1 Jan 13 15:07:36.478: INFO: Versions found [{authentication.k8s.io/v1 v1} {authentication.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.478: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1 Jan 13 15:07:36.478: INFO: Checking APIGroup: authorization.k8s.io Jan 13 15:07:36.479: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1 Jan 13 15:07:36.479: INFO: Versions found [{authorization.k8s.io/v1 v1} {authorization.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.479: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1 Jan 13 15:07:36.479: INFO: Checking APIGroup: autoscaling Jan 13 15:07:36.480: INFO: PreferredVersion.GroupVersion: autoscaling/v1 Jan 13 15:07:36.481: INFO: Versions found [{autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}] Jan 13 15:07:36.481: INFO: autoscaling/v1 matches autoscaling/v1 Jan 13 15:07:36.481: INFO: Checking APIGroup: batch Jan 13 15:07:36.483: INFO: PreferredVersion.GroupVersion: batch/v1 Jan 13 15:07:36.483: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}] Jan 13 15:07:36.483: INFO: batch/v1 matches batch/v1 Jan 13 15:07:36.483: INFO: Checking APIGroup: certificates.k8s.io Jan 13 15:07:36.485: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1 Jan 13 15:07:36.485: INFO: Versions found [{certificates.k8s.io/v1 v1} {certificates.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.485: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1 Jan 13 15:07:36.485: INFO: Checking APIGroup: networking.k8s.io Jan 13 15:07:36.486: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1 Jan 13 15:07:36.486: INFO: Versions found [{networking.k8s.io/v1 v1} {networking.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.486: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1 Jan 13 15:07:36.486: INFO: Checking APIGroup: extensions Jan 13 15:07:36.488: INFO: PreferredVersion.GroupVersion: extensions/v1beta1 Jan 13 15:07:36.488: INFO: Versions found [{extensions/v1beta1 v1beta1}] Jan 13 15:07:36.488: INFO: extensions/v1beta1 matches extensions/v1beta1 Jan 13 15:07:36.488: INFO: Checking APIGroup: policy Jan 13 15:07:36.490: INFO: PreferredVersion.GroupVersion: policy/v1beta1 Jan 13 15:07:36.490: INFO: Versions found [{policy/v1beta1 v1beta1}] Jan 13 15:07:36.490: INFO: policy/v1beta1 matches policy/v1beta1 Jan 13 15:07:36.490: INFO: Checking APIGroup: rbac.authorization.k8s.io Jan 13 15:07:36.492: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1 Jan 13 15:07:36.492: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1} {rbac.authorization.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.492: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1 Jan 13 15:07:36.492: INFO: Checking APIGroup: storage.k8s.io Jan 13 15:07:36.494: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1 Jan 13 15:07:36.494: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.494: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1 Jan 13 15:07:36.494: INFO: Checking APIGroup: admissionregistration.k8s.io Jan 13 15:07:36.495: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1 Jan 13 15:07:36.495: INFO: Versions found [{admissionregistration.k8s.io/v1 v1} {admissionregistration.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.495: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1 Jan 13 15:07:36.495: INFO: 
Checking APIGroup: apiextensions.k8s.io Jan 13 15:07:36.496: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1 Jan 13 15:07:36.496: INFO: Versions found [{apiextensions.k8s.io/v1 v1} {apiextensions.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.496: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1 Jan 13 15:07:36.496: INFO: Checking APIGroup: scheduling.k8s.io Jan 13 15:07:36.498: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1 Jan 13 15:07:36.498: INFO: Versions found [{scheduling.k8s.io/v1 v1} {scheduling.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.498: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1 Jan 13 15:07:36.498: INFO: Checking APIGroup: coordination.k8s.io Jan 13 15:07:36.500: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1 Jan 13 15:07:36.500: INFO: Versions found [{coordination.k8s.io/v1 v1} {coordination.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.500: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1 Jan 13 15:07:36.500: INFO: Checking APIGroup: node.k8s.io Jan 13 15:07:36.501: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1 Jan 13 15:07:36.501: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.501: INFO: node.k8s.io/v1 matches node.k8s.io/v1 Jan 13 15:07:36.502: INFO: Checking APIGroup: discovery.k8s.io Jan 13 15:07:36.503: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1beta1 Jan 13 15:07:36.503: INFO: Versions found [{discovery.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.503: INFO: discovery.k8s.io/v1beta1 matches discovery.k8s.io/v1beta1 Jan 13 15:07:36.503: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io Jan 13 15:07:36.504: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta1 Jan 13 15:07:36.504: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}] Jan 13 15:07:36.504: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1 [AfterEach] [sig-api-machinery] Discovery /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:07:36.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "discovery-9740" for this suite. 
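The check being run here is simple: for every API group returned by discovery, the advertised PreferredVersion must appear in that group's Versions list. A client-go sketch of the same check (clientset construction omitted):

```go
package sketch

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// checkPreferredVersions lists all API groups via the discovery client
// and confirms each group's PreferredVersion is one of its served versions.
func checkPreferredVersions(cs kubernetes.Interface) error {
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		found := false
		for _, v := range g.Versions {
			if v.GroupVersion == g.PreferredVersion.GroupVersion {
				found = true
				break
			}
		}
		if !found {
			return fmt.Errorf("group %s: preferred version %s not among served versions %v",
				g.Name, g.PreferredVersion.GroupVersion, g.Versions)
		}
	}
	return nil
}
```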
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":28,"skipped":563,"failed":5,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:03:55.557: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-2868 �[1mSTEP�[0m: creating service affinity-nodeport in namespace services-2868 �[1mSTEP�[0m: creating replication controller affinity-nodeport in namespace services-2868 I0113 15:03:55.655344 14 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-2868, replica count: 3 I0113 15:03:58.706290 14 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:03:58.720: INFO: Creating new exec pod Jan 13 15:04:01.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 13 15:04:02.049: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jan 13 15:04:02.049: INFO: stdout: "" Jan 13 15:04:02.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c nc -zv -t -w 2 10.132.196.5 80' Jan 13 15:04:02.350: INFO: stderr: "+ nc -zv -t -w 2 10.132.196.5 80\nConnection to 10.132.196.5 80 port [tcp/http] succeeded!\n" Jan 13 15:04:02.350: INFO: stdout: "" Jan 13 15:04:02.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31013' Jan 13 15:04:02.680: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31013\nConnection to 172.18.0.5 31013 port [tcp/31013] succeeded!\n" Jan 13 15:04:02.680: INFO: stdout: "" Jan 13 15:04:02.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 31013' Jan 13 15:04:03.010: INFO: stderr: "+ nc -zv -t -w 
2 172.18.0.4 31013\nConnection to 172.18.0.4 31013 port [tcp/31013] succeeded!\n" Jan 13 15:04:03.010: INFO: stdout: "" Jan 13 15:04:03.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31013/ ; done' Jan 13 15:04:53.331: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31013/\n" Jan 13 15:04:53.331: INFO: stdout: "\n" Jan 13 15:05:23.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31013/ ; done' Jan 13 15:06:13.682: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31013/\n" Jan 13 15:06:13.682: INFO: stdout: "\n" Jan 13 15:06:23.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31013/ ; done' Jan 13 15:07:13.712: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31013/\n" Jan 13 15:07:13.712: INFO: stdout: "\n" Jan 13 15:07:13.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2868 exec execpod-affinity9p2x6 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31013/ ; done' Jan 13 15:08:04.113: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31013/\n" Jan 13 15:08:04.113: INFO: stdout: "\n" Jan 13 15:08:04.113: INFO: [] Jan 13 15:08:04.114: FAIL: Connection timed out or not enough responses. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc00186adc0, 0xc001c20000, 0xc00036b980, 0xa, 0x7925, 0x1, 0xc001c20000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000e914a0, 0x56112e0, 0xc00186adc0, 0xc000c82000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3447 +0x92c k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3403
k8s.io/kubernetes/test/e2e/network.glob..func24.28()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2452 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000c36180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000c36180)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000c36180, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
Jan 13 15:08:04.115: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-2868, will wait for the garbage collector to delete the pods
Jan 13 15:08:04.194: INFO: Deleting ReplicationController affinity-nodeport took: 9.642127ms
Jan 13 15:08:04.695: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.874442ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:08:12.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2868" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• Failure [257.333 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should have session affinity work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
  Jan 13 15:08:04.114: Connection timed out or not enough responses.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":21,"skipped":330,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:08:12.899: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating service in namespace services-5165
STEP: creating service
affinity-nodeport in namespace services-5165 �[1mSTEP�[0m: creating replication controller affinity-nodeport in namespace services-5165 I0113 15:08:13.017377 14 runners.go:190] Created replication controller with name: affinity-nodeport, namespace: services-5165, replica count: 3 I0113 15:08:16.068145 14 runners.go:190] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:08:16.085: INFO: Creating new exec pod Jan 13 15:08:19.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5165 exec execpod-affinityrspqr -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport 80' Jan 13 15:08:19.460: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n" Jan 13 15:08:19.460: INFO: stdout: "" Jan 13 15:08:19.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5165 exec execpod-affinityrspqr -- /bin/sh -x -c nc -zv -t -w 2 10.141.159.18 80' Jan 13 15:08:19.827: INFO: stderr: "+ nc -zv -t -w 2 10.141.159.18 80\nConnection to 10.141.159.18 80 port [tcp/http] succeeded!\n" Jan 13 15:08:19.827: INFO: stdout: "" Jan 13 15:08:19.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5165 exec execpod-affinityrspqr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.4 30432' Jan 13 15:08:20.144: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.4 30432\nConnection to 172.18.0.4 30432 port [tcp/30432] succeeded!\n" Jan 13 15:08:20.144: INFO: stdout: "" Jan 13 15:08:20.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5165 exec execpod-affinityrspqr -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.7 30432' Jan 13 15:08:20.499: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.7 30432\nConnection to 172.18.0.7 30432 port [tcp/30432] succeeded!\n" Jan 13 15:08:20.499: INFO: stdout: "" Jan 13 15:08:20.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5165 exec execpod-affinityrspqr -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:30432/ ; done' Jan 13 15:08:21.050: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:30432/\n" Jan 13 15:08:21.050: INFO: stdout: 
"\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz\naffinity-nodeport-jd2kz" Jan 13 15:08:21.050: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Received response from host: affinity-nodeport-jd2kz Jan 13 15:08:21.051: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport in namespace services-5165, will wait for the garbage collector to delete the pods Jan 13 15:08:21.140: INFO: Deleting ReplicationController affinity-nodeport took: 11.707715ms Jan 13 15:08:21.640: INFO: Terminating ReplicationController affinity-nodeport pods took: 500.393121ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:32.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-5165" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":22,"skipped":330,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:06:27.046: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-4409 Jan 13 15:06:29.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 13 15:06:29.482: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jan 13 15:06:29.482: INFO: stdout: "iptables" Jan 13 15:06:29.482: INFO: proxyMode: iptables Jan 13 15:06:29.502: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 13 15:06:29.507: INFO: Pod kube-proxy-mode-detector no longer exists �[1mSTEP�[0m: creating service affinity-nodeport-timeout in namespace services-4409 �[1mSTEP�[0m: creating replication controller affinity-nodeport-timeout in namespace services-4409 I0113 15:06:29.546393 16 runners.go:190] Created replication controller with name: affinity-nodeport-timeout, namespace: services-4409, replica count: 3 I0113 15:06:32.597264 16 runners.go:190] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:06:32.616: INFO: Creating new exec pod Jan 13 15:06:35.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c nc -zv -t -w 2 affinity-nodeport-timeout 80' Jan 13 15:06:35.977: INFO: stderr: "+ nc -zv -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jan 13 
15:06:35.977: INFO: stdout: "" Jan 13 15:06:35.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c nc -zv -t -w 2 10.128.21.34 80' Jan 13 15:06:36.290: INFO: stderr: "+ nc -zv -t -w 2 10.128.21.34 80\nConnection to 10.128.21.34 80 port [tcp/http] succeeded!\n" Jan 13 15:06:36.290: INFO: stdout: "" Jan 13 15:06:36.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.5 31800' Jan 13 15:06:36.627: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.5 31800\nConnection to 172.18.0.5 31800 port [tcp/31800] succeeded!\n" Jan 13 15:06:36.628: INFO: stdout: "" Jan 13 15:06:36.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.6 31800' Jan 13 15:06:36.923: INFO: stderr: "+ nc -zv -t -w 2 172.18.0.6 31800\nConnection to 172.18.0.6 31800 port [tcp/31800] succeeded!\n" Jan 13 15:06:36.923: INFO: stdout: "" Jan 13 15:06:36.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31800/ ; done' Jan 13 15:06:37.389: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n" Jan 13 15:06:37.389: INFO: stdout: "\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr\naffinity-nodeport-timeout-28vrr" Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: 
affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.389: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Received response from host: affinity-nodeport-timeout-28vrr Jan 13 15:06:37.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31800/' Jan 13 15:06:37.728: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n" Jan 13 15:06:37.728: INFO: stdout: "affinity-nodeport-timeout-28vrr" Jan 13 15:06:57.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31800/' Jan 13 15:06:58.084: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n" Jan 13 15:06:58.084: INFO: stdout: "affinity-nodeport-timeout-28vrr" Jan 13 15:07:18.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31800/' Jan 13 15:07:18.634: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n" Jan 13 15:07:18.634: INFO: stdout: "affinity-nodeport-timeout-28vrr" Jan 13 15:07:38.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4409 exec execpod-affinitywzxfc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.7:31800/' Jan 13 15:08:28.953: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.7:31800/\n" Jan 13 15:08:28.953: INFO: stdout: "" Jan 13 15:08:28.953: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-timeout in namespace services-4409, will wait for the garbage collector to delete the pods Jan 13 15:08:29.043: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.509043ms Jan 13 15:08:29.143: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.221147ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:42.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-4409" for this suite. 
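The service under test here enables ClientIP session affinity with an explicit timeout; the repeated single curls above, spaced by waits, probe whether the affinity entry survives or expires across those waits. Roughly, the service spec looks like the sketch below; names, ports, and the timeout value are placeholders, not the values used by this run.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// affinityTimeoutService builds a NodePort service with ClientIP session
// affinity and an explicit affinity timeout, the shape exercised by the
// affinity-nodeport-timeout spec above.
func affinityTimeoutService(ns string) *corev1.Service {
	timeout := int32(60) // placeholder timeout in seconds
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "affinity-nodeport-timeout", Namespace: ns},
		Spec: corev1.ServiceSpec{
			Type:            corev1.ServiceTypeNodePort,
			Selector:        map[string]string{"name": "affinity-nodeport-timeout"},
			SessionAffinity: corev1.ServiceAffinityClientIP,
			SessionAffinityConfig: &corev1.SessionAffinityConfig{
				ClientIP: &corev1.ClientIPConfig{TimeoutSeconds: &timeout},
			},
			Ports: []corev1.ServicePort{{Port: 80}},
		},
	}
}
```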
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 �[32m• [SLOW TEST:135.220 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23�[0m should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":66,"skipped":1135,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:08:32.844: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: running the image docker.io/library/httpd:2.4.38-alpine Jan 13 15:08:32.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7062 run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod' Jan 13 15:08:33.132: INFO: stderr: "" Jan 13 15:08:33.132: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: replace the image in the pod with server-side dry-run Jan 13 15:08:33.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7062 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "docker.io/library/busybox:1.29"}]}} --dry-run=server' Jan 13 15:08:34.274: INFO: stderr: "" Jan 13 15:08:34.274: INFO: stdout: "pod/e2e-test-httpd-pod patched\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/httpd:2.4.38-alpine Jan 13 15:08:34.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7062 delete pods e2e-test-httpd-pod' Jan 13 15:08:42.700: INFO: stderr: "" Jan 13 15:08:42.700: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:42.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7062" for this suite. 
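The dry-run step above relies on server-side dry run: the patch goes through admission and validation but is never persisted, which is why the live pod still reports the httpd image afterwards. The same call expressed with client-go, with placeholder names, looks roughly like this:

```go
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// dryRunPatchImage performs a strategic-merge patch of the pod's container
// image with DryRun set to "All", so the server validates and admits the
// change without persisting it.
func dryRunPatchImage(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	patch := []byte(`{"spec":{"containers":[{"name":"e2e-test-httpd-pod","image":"docker.io/library/busybox:1.29"}]}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.StrategicMergePatchType, patch,
		metav1.PatchOptions{DryRun: []string{metav1.DryRunAll}})
	return err
}
```

kubectl's `--dry-run=server` flag maps onto the same DryRun option in the patch request.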
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":23,"skipped":372,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":37,"skipped":807,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:04:22.791: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:745 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating service in namespace services-342 �[1mSTEP�[0m: creating service affinity-clusterip-transition in namespace services-342 �[1mSTEP�[0m: creating replication controller affinity-clusterip-transition in namespace services-342 I0113 15:04:22.888318 17 runners.go:190] Created replication controller with name: affinity-clusterip-transition, namespace: services-342, replica count: 3 I0113 15:04:25.938838 17 runners.go:190] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:04:25.947: INFO: Creating new exec pod Jan 13 15:04:28.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c nc -zv -t -w 2 affinity-clusterip-transition 80' Jan 13 15:04:29.325: INFO: stderr: "+ nc -zv -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jan 13 15:04:29.325: INFO: stdout: "" Jan 13 15:04:29.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c nc -zv -t -w 2 10.142.37.199 80' Jan 13 15:04:29.622: INFO: stderr: "+ nc -zv -t -w 2 10.142.37.199 80\nConnection to 10.142.37.199 80 port [tcp/http] succeeded!\n" Jan 13 15:04:29.622: INFO: stdout: "" Jan 13 15:04:29.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.37.199:80/ ; done' Jan 13 15:05:20.028: INFO: stderr: "+ seq 0 15\n+ echo\n+ 
curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n" Jan 13 15:05:20.028: INFO: stdout: "\naffinity-clusterip-transition-5hjw4\n" Jan 13 15:05:20.028: INFO: Received response from host: affinity-clusterip-transition-5hjw4 Jan 13 15:05:50.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.37.199:80/ ; done' Jan 13 15:06:40.323: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n" Jan 13 15:06:40.323: INFO: stdout: "\n" Jan 13 15:06:50.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.37.199:80/ ; done' Jan 13 15:07:40.389: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n" Jan 13 15:07:40.389: INFO: stdout: "\naffinity-clusterip-transition-xfk7j\naffinity-clusterip-transition-5hjw4\naffinity-clusterip-transition-xfk7j\naffinity-clusterip-transition-5hjw4\naffinity-clusterip-transition-xfk7j\n" Jan 13 15:07:40.389: INFO: Received response from host: affinity-clusterip-transition-xfk7j Jan 13 15:07:40.389: INFO: Received response from host: affinity-clusterip-transition-5hjw4 Jan 13 15:07:40.389: INFO: Received response from host: affinity-clusterip-transition-xfk7j Jan 13 15:07:40.389: INFO: Received response from host: affinity-clusterip-transition-5hjw4 Jan 13 15:07:40.389: INFO: Received response from host: affinity-clusterip-transition-xfk7j Jan 13 15:07:40.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-342 exec execpod-affinitysh4tc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.37.199:80/ ; done' Jan 13 15:08:30.780: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.142.37.199:80/\n" Jan 13 15:08:30.780: INFO: stdout: "\n" Jan 13 15:08:30.780: INFO: [affinity-clusterip-transition-5hjw4 affinity-clusterip-transition-xfk7j affinity-clusterip-transition-5hjw4 affinity-clusterip-transition-xfk7j affinity-clusterip-transition-5hjw4 affinity-clusterip-transition-xfk7j] Jan 13 15:08:30.780: FAIL: Connection timed out or not enough responses. Full Stack Trace k8s.io/kubernetes/test/e2e/network.checkAffinity(0x56112e0, 0xc0020e6580, 0xc000b9f800, 0xc0001aa6c0, 0xd, 0x50, 0x0, 0xc000b9f800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202 +0x2db k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0xc000c8eb00, 0x56112e0, 0xc0020e6580, 0xc00103f180, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3454 +0x79b k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) 
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3399
k8s.io/kubernetes/test/e2e/network.glob..func24.27()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2437 +0xa5
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc000cf4c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:130 +0x36c
k8s.io/kubernetes/test/e2e.TestE2E(0xc000cf4c00)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:144 +0x2b
testing.tRunner(0xc000cf4c00, 0x4fc9940)
	/usr/local/go/src/testing/testing.go:1123 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1168 +0x2b3
Jan 13 15:08:30.782: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-342, will wait for the garbage collector to delete the pods
Jan 13 15:08:30.885: INFO: Deleting ReplicationController affinity-clusterip-transition took: 18.291243ms
Jan 13 15:08:30.986: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.881312ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:08:42.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-342" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
• Failure [260.053 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
  Jan 13 15:08:30.781: Connection timed out or not enough responses.
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:202
------------------------------
[BeforeEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:08:42.775: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating a test event
STEP: listing all events in all namespaces
STEP: patching the test event
STEP: fetching the test event
STEP: deleting the test event
STEP: listing all events in all namespaces
[AfterEach] [sig-api-machinery] Events
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:08:42.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9090" for this suite.
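Returning to the affinity-transition failure recorded above: that spec toggles an existing ClusterIP service's sessionAffinity setting and re-checks the affinity behaviour after each change, so a timeout during the probe fails the spec even when the toggles themselves succeed. A sketch of the toggle, assuming a standard clientset and placeholder names (the probe itself is the same criterion sketched earlier):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setSessionAffinity switches an existing service between ClientIP and
// None session affinity, retrying on conflicts. Clearing the affinity
// config when switching to None keeps the spec consistent.
func setSessionAffinity(ctx context.Context, cs kubernetes.Interface, ns, name string, affinity corev1.ServiceAffinity) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		svc.Spec.SessionAffinity = affinity // corev1.ServiceAffinityClientIP or corev1.ServiceAffinityNone
		if affinity == corev1.ServiceAffinityNone {
			svc.Spec.SessionAffinityConfig = nil
		}
		_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
		return err
	})
}
```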
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":24,"skipped":380,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:08:42.975: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename tables �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:43.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "tables-7493" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":25,"skipped":388,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:08:42.396: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:187 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a Pod with a static label �[1mSTEP�[0m: watching for Pod to be ready Jan 13 15:08:42.508: INFO: observed Pod pod-test in namespace pods-3015 in phase Pending conditions [] Jan 13 15:08:42.519: INFO: observed Pod pod-test in namespace pods-3015 in phase Pending conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 15:08:42 +0000 UTC }] Jan 13 15:08:42.538: INFO: observed Pod pod-test in namespace pods-3015 in phase Pending conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 15:08:42 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 15:08:42 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-13 15:08:42 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-13 15:08:42 +0000 UTC }] �[1mSTEP�[0m: patching the Pod with a new Label and updated data Jan 13 15:08:44.600: INFO: observed event type ADDED �[1mSTEP�[0m: getting the Pod and ensuring that it's patched �[1mSTEP�[0m: getting the PodStatus �[1mSTEP�[0m: replacing the Pod's status Ready condition to False �[1mSTEP�[0m: check the Pod again to ensure its Ready conditions are False �[1mSTEP�[0m: deleting the Pod via a Collection with a LabelSelector �[1mSTEP�[0m: watching for the Pod to be deleted Jan 13 15:08:44.663: INFO: observed event type ADDED Jan 13 15:08:44.663: INFO: observed event type MODIFIED Jan 13 15:08:44.663: INFO: observed event type MODIFIED Jan 13 15:08:44.663: INFO: observed event type MODIFIED Jan 13 15:08:44.664: INFO: observed event type MODIFIED Jan 13 15:08:44.664: INFO: observed event type MODIFIED Jan 13 15:08:44.664: INFO: observed event type MODIFIED [AfterEach] [k8s.io] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:44.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-3015" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 �[1mSTEP�[0m: Creating a kubernetes client Jan 13 15:08:43.103: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:85 [It] should run the lifecycle of a Deployment [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 �[1mSTEP�[0m: creating a Deployment �[1mSTEP�[0m: waiting for Deployment to be created �[1mSTEP�[0m: waiting for all Replicas to be Ready Jan 13 15:08:43.181: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.181: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.195: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.195: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.220: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.220: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.267: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:43.267: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 and labels map[test-deployment-static:true] Jan 13 15:08:45.334: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 13 15:08:45.334: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment-static:true] Jan 13 15:08:45.588: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 and labels map[test-deployment-static:true] �[1mSTEP�[0m: patching the Deployment Jan 13 15:08:45.612: INFO: observed event type ADDED �[1mSTEP�[0m: waiting for Replicas to scale Jan 13 15:08:45.616: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with 
ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 0 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.617: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.631: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.631: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.676: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.676: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.701: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.701: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 2 Jan 13 15:08:45.724: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 �[1mSTEP�[0m: listing Deployments Jan 13 15:08:45.740: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true] �[1mSTEP�[0m: updating the Deployment Jan 13 15:08:45.758: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 �[1mSTEP�[0m: fetching the DeploymentStatus Jan 13 15:08:45.773: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 15:08:45.799: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 15:08:45.833: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 15:08:45.915: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 15:08:46.016: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] Jan 13 15:08:46.069: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true] �[1mSTEP�[0m: patching the DeploymentStatus �[1mSTEP�[0m: fetching the DeploymentStatus Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 
with ReadyReplicas 1 Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 Jan 13 15:08:48.373: INFO: observed Deployment test-deployment in namespace deployment-6373 with ReadyReplicas 1 �[1mSTEP�[0m: deleting the Deployment Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED Jan 13 15:08:48.411: INFO: observed event type MODIFIED [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:79 Jan 13 15:08:48.445: INFO: Log out all the ReplicaSets if there is no deployment created Jan 13 15:08:48.450: INFO: ReplicaSet "test-deployment-768947d6f5": &ReplicaSet{ObjectMeta:{test-deployment-768947d6f5 deployment-6373 4ec04a34-09da-4b00-9a31-02cdee474a01 14451 3 2023-01-13 15:08:45 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 58d0d71d-398b-4340-bd06-f31296f0ba19 0xc002cefd07 0xc002cefd08}] [] [{kube-controller-manager Update apps/v1 2023-01-13 15:08:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d0d71d-398b-4340-bd06-f31296f0ba19\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 768947d6f5,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cefd70 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> 
<nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:3,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 15:08:48.459: INFO: pod: "test-deployment-768947d6f5-4pkxw": &Pod{ObjectMeta:{test-deployment-768947d6f5-4pkxw test-deployment-768947d6f5- deployment-6373 80c41c39-adf2-4af2-8c7b-6b0e07b31395 14427 0 2023-01-13 15:08:45 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 4ec04a34-09da-4b00-9a31-02cdee474a01 0xc002ac4ff7 0xc002ac4ff8}] [] [{kube-controller-manager Update v1 2023-01-13 15:08:45 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec04a34-09da-4b00-9a31-02cdee474a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-13 15:08:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.123\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nbv72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nbv72,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nbv72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-qbjsr,HostNetwork:false,HostPID:fals
e,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.123,StartTime:2023-01-13 15:08:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-13 15:08:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://31e8eefcbcf47b95006ecd6c185af545fe6a88a457fe5e636263bc7b4a9ac151,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.123,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 15:08:48.459: INFO: pod: "test-deployment-768947d6f5-4qtv9": &Pod{ObjectMeta:{test-deployment-768947d6f5-4qtv9 test-deployment-768947d6f5- deployment-6373 11fd6e4a-8bae-4e37-9df5-20de98fdfb5b 14455 0 2023-01-13 15:08:48 +0000 UTC <nil> <nil> map[pod-template-hash:768947d6f5 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-768947d6f5 4ec04a34-09da-4b00-9a31-02cdee474a01 0xc002ac5297 0xc002ac5298}] [] [{kube-controller-manager Update v1 2023-01-13 15:08:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4ec04a34-09da-4b00-9a31-02cdee474a01\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-13 15:08:48 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nbv72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nbv72,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nbv72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace
:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [test-deployment],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-13 15:08:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 13 15:08:48.459: INFO: ReplicaSet "test-deployment-7c65d4bcf9": &ReplicaSet{ObjectMeta:{test-deployment-7c65d4bcf9 deployment-6373 3421f22c-3502-48ca-bb2d-6e92bfe062e5 14443 4 2023-01-13 15:08:45 +0000 UTC <nil> <nil> map[pod-template-hash:7c65d4bcf9 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 58d0d71d-398b-4340-bd06-f31296f0ba19 0xc002cefdd7 0xc002cefdd8}] [] [{kube-controller-manager Update apps/v1 2023-01-13 15:08:48 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d0d71d-398b-4340-bd06-f31296f0ba19\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 7c65d4bcf9,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:7c65d4bcf9 
test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.2 [/bin/sleep 100000] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002cefe58 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 13 15:08:48.471: INFO: ReplicaSet "test-deployment-8b6954bfb": &ReplicaSet{ObjectMeta:{test-deployment-8b6954bfb deployment-6373 f05c4cb2-9a20-47e3-a5d2-5064de30f0cc 14303 2 2023-01-13 15:08:43 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 58d0d71d-398b-4340-bd06-f31296f0ba19 0xc002cefeb7 0xc002cefeb8}] [] [{kube-controller-manager Update apps/v1 2023-01-13 15:08:45 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"58d0d71d-398b-4340-bd06-f31296f0ba19\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 8b6954bfb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.21 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ceff20 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 13 15:08:48.487: INFO: pod: "test-deployment-8b6954bfb-r2xzq": &Pod{ObjectMeta:{test-deployment-8b6954bfb-r2xzq 
test-deployment-8b6954bfb- deployment-6373 d0c2ed10-9d65-4743-b5a4-7f6c20bbfd8f 14270 0 2023-01-13 15:08:43 +0000 UTC <nil> <nil> map[pod-template-hash:8b6954bfb test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-8b6954bfb f05c4cb2-9a20-47e3-a5d2-5064de30f0cc 0xc002c2eef7 0xc002c2eef8}] [] [{kube-controller-manager Update v1 2023-01-13 15:08:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f05c4cb2-9a20-47e3-a5d2-5064de30f0cc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2023-01-13 15:08:45 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.56\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nbv72,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nbv72,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nbv72,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4w1i3t-worker-ceauut,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default
-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-13 15:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.56,StartTime:2023-01-13 15:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-13 15:08:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.21,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:ab055cd3d45f50b90732c14593a5bf50f210871bb4f91994c756fc22db6d922a,ContainerID:containerd://3187d56b982f266dba48e6981dd9e13f162301cb27bb3b0a19b5443df6d0e596,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.56,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:08:48.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-6373" for this suite. 
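The Deployment lifecycle spec above creates a Deployment, patches and updates it (swapping labels and the container image), polls ReadyReplicas, and finally deletes it. Below is a minimal client-go sketch of those operations; the namespace and patch contents are illustrative assumptions, while the images are ones that appear in the ReplicaSet dumps above.

package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path, as above
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx, ns := context.Background(), "deployment-example" // illustrative namespace

	labels := map[string]string{"test-deployment-static": "true"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(2),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "test-deployment",
					Image: "k8s.gcr.io/e2e-test-images/agnhost:2.21", // image used elsewhere in this run
				}}},
			},
		},
	}

	// Create, then strategic-merge patch the Deployment's labels and template image,
	// mirroring "creating a Deployment" / "patching the Deployment" / "updating the Deployment".
	if _, err := client.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	patch := []byte(`{"metadata":{"labels":{"test-deployment":"patched"}},` +
		`"spec":{"replicas":1,"template":{"spec":{"containers":[{"name":"test-deployment",` +
		`"image":"docker.io/library/httpd:2.4.38-alpine"}]}}}}`)
	if _, err := client.AppsV1().Deployments(ns).Patch(ctx, "test-deployment",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Read back status (ReadyReplicas is what the spec above polls on), then delete.
	got, err := client.AppsV1().Deployments(ns).Get(ctx, "test-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	_ = got.Status.ReadyReplicas
	_ = client.AppsV1().Deployments(ns).Delete(ctx, "test-deployment", metav1.DeleteOptions{})
}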
• ------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":26,"skipped":391,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:08:48.626: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating pod pod-subpath-test-projected-xdlf
STEP: Creating a pod to test atomic-volume-subpath
Jan 13 15:08:48.717: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-xdlf" in namespace "subpath-8143" to be "Succeeded or Failed"
Jan 13 15:08:48.722: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.649654ms
Jan 13 15:08:50.728: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011628341s
Jan 13 15:08:52.732: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 4.015273764s
Jan 13 15:08:54.736: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 6.018868293s
Jan 13 15:08:56.741: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 8.024104367s
Jan 13 15:08:58.746: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 10.029479697s
Jan 13 15:09:00.752: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 12.034972113s
Jan 13 15:09:02.759: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 14.041922301s
Jan 13 15:09:04.764: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 16.046929224s
Jan 13 15:09:06.769: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 18.052578743s
Jan 13 15:09:08.775: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 20.058344157s
Jan 13 15:09:10.786: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Running", Reason="", readiness=true. Elapsed: 22.069565153s
Jan 13 15:09:12.793: INFO: Pod "pod-subpath-test-projected-xdlf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.075913434s
STEP: Saw pod success
Jan 13 15:09:12.793: INFO: Pod "pod-subpath-test-projected-xdlf" satisfied condition "Succeeded or Failed"
Jan 13 15:09:12.798: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod pod-subpath-test-projected-xdlf container test-container-subpath-projected-xdlf: <nil>
STEP: delete the pod
Jan 13 15:09:12.857: INFO: Waiting for pod pod-subpath-test-projected-xdlf to disappear
Jan 13 15:09:12.862: INFO: Pod pod-subpath-test-projected-xdlf no longer exists
STEP: Deleting pod pod-subpath-test-projected-xdlf
Jan 13 15:09:12.862: INFO: Deleting pod "pod-subpath-test-projected-xdlf" in namespace "subpath-8143"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:12.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8143" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":-1,"completed":27,"skipped":425,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
{"msg":"PASSED [k8s.io] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":67,"skipped":1158,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:08:44.688: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 13 15:08:44.750: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 13 15:08:57.866: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 13 15:09:00.786: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:12.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-737" for this suite.
• ------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":68,"skipped":1158,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:09:13.078: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: creating secret secrets-107/secret-test-a864abcb-c5e2-4085-8c8a-85ca54ce6bb8
STEP: Creating a pod to test consume secrets
Jan 13 15:09:13.150: INFO: Waiting up to 5m0s for pod "pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85" in namespace "secrets-107" to be "Succeeded or Failed"
Jan 13 15:09:13.154: INFO: Pod "pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85": Phase="Pending", Reason="", readiness=false. Elapsed: 3.281098ms
Jan 13 15:09:15.157: INFO: Pod "pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007139546s
STEP: Saw pod success
Jan 13 15:09:15.158: INFO: Pod "pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85" satisfied condition "Succeeded or Failed"
Jan 13 15:09:15.161: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-worker-ceauut pod pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85 container env-test: <nil>
STEP: delete the pod
Jan 13 15:09:15.195: INFO: Waiting for pod pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85 to disappear
Jan 13 15:09:15.208: INFO: Pod pod-configmaps-89fc9632-7de1-4dd5-86c0-f0d1a372fb85 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:15.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-107" for this suite.
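The Secrets spec above injects a secret value into a container's environment through a secretKeyRef. A minimal sketch of the two objects involved follows; it only constructs and prints them, and the object names, key, and command are illustrative assumptions (the httpd image is one pulled elsewhere in this run).

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative names; the run above generates random suffixes for both objects.
	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
		StringData: map[string]string{"data-1": "value-1"},
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secret-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "docker.io/library/httpd:2.4.38-alpine", // image used elsewhere in this run
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{{
					// The container sees the secret value as an ordinary environment variable.
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}

	for _, obj := range []interface{}{secret, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}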
• ------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":69,"skipped":1188,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:09:12.942: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 13 15:09:13.538: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 13 15:09:16.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:16.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4473" for this suite.
STEP: Destroying namespace "webhook-4473-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":28,"skipped":437,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:09:15.241: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 15:09:15.292: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-ab162b18-06f4-487d-b607-85e998e01f34" in namespace "security-context-test-8868" to be "Succeeded or Failed"
Jan 13 15:09:15.295: INFO: Pod "busybox-readonly-false-ab162b18-06f4-487d-b607-85e998e01f34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366993ms
Jan 13 15:09:17.301: INFO: Pod "busybox-readonly-false-ab162b18-06f4-487d-b607-85e998e01f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008704504s
Jan 13 15:09:17.301: INFO: Pod "busybox-readonly-false-ab162b18-06f4-487d-b607-85e998e01f34" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:17.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8868" for this suite.
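The Security Context spec above asserts that a container whose securityContext sets readOnlyRootFilesystem to false can write to its root filesystem and exit successfully, which is why the pod reaches "Succeeded". A minimal sketch of such a pod follows; the pod name, command, and image choice are illustrative assumptions, and the sketch only constructs and prints the object.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// With ReadOnlyRootFilesystem set to false the write below succeeds and the
	// container exits 0, so the pod ends up in phase Succeeded as in the spec above.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "writable-rootfs",
				Image:   "docker.io/library/httpd:2.4.38-alpine", // image pulled elsewhere in this run
				Command: []string{"sh", "-c", "echo ok > /probe-file"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: boolPtr(false),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}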
• ------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1194,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:09:17.327: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
STEP: Creating a pod to test downward api env vars
Jan 13 15:09:17.395: INFO: Waiting up to 5m0s for pod "downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4" in namespace "downward-api-3222" to be "Succeeded or Failed"
Jan 13 15:09:17.399: INFO: Pod "downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366041ms
Jan 13 15:09:19.402: INFO: Pod "downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007327154s
STEP: Saw pod success
Jan 13 15:09:19.403: INFO: Pod "downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4" satisfied condition "Succeeded or Failed"
Jan 13 15:09:19.406: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4 container dapi-container: <nil>
STEP: delete the pod
Jan 13 15:09:19.427: INFO: Waiting for pod downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4 to disappear
Jan 13 15:09:19.430: INFO: Pod downward-api-5e6f1d1c-a41e-4891-a972-66765490dbc4 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 13 15:09:19.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3222" for this suite.
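The Downward API spec above exposes the pod's own name, namespace, and IP to its container as environment variables via fieldRef selectors. A minimal sketch of an equivalent pod follows; the pod name, variable names, and image are illustrative assumptions, and the sketch only constructs and prints the object.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Helper that builds one downward-API env var from a field path.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/httpd:2.4.38-alpine", // image pulled elsewhere in this run
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}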
• ------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":71,"skipped":1197,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 13 15:09:19.464: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629
Jan 13 15:09:19.492: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 13 15:09:21.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 create -f -'
Jan 13 15:09:22.753: INFO: stderr: ""
Jan 13 15:09:22.754: INFO: stdout: "e2e-test-crd-publish-openapi-8545-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 13 15:09:22.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 delete e2e-test-crd-publish-openapi-8545-crds test-foo'
Jan 13 15:09:22.846: INFO: stderr: ""
Jan 13 15:09:22.846: INFO: stdout: "e2e-test-crd-publish-openapi-8545-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 13 15:09:22.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 apply -f -'
Jan 13 15:09:23.102: INFO: stderr: ""
Jan 13 15:09:23.102: INFO: stdout: "e2e-test-crd-publish-openapi-8545-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 13 15:09:23.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 delete e2e-test-crd-publish-openapi-8545-crds test-foo'
Jan 13 15:09:23.206: INFO: stderr: ""
Jan 13 15:09:23.206: INFO: stdout: "e2e-test-crd-publish-openapi-8545-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 13 15:09:23.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 create -f -'
Jan 13 15:09:23.433: INFO: rc: 1
Jan 13 15:09:23.433: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 apply -f -'
Jan 13 15:09:23.659: INFO: rc: 1
STEP: client-side validation
(kubectl create and apply) rejects request without required properties Jan 13 15:09:23.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 create -f -' Jan 13 15:09:23.881: INFO: rc: 1 Jan 13 15:09:23.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 --namespace=crd-publish-openapi-4357 apply -f -' Jan 13 15:09:24.109: INFO: rc: 1 �[1mSTEP�[0m: kubectl explain works to explain CR properties Jan 13 15:09:24.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 explain e2e-test-crd-publish-openapi-8545-crds' Jan 13 15:09:24.330: INFO: stderr: "" Jan 13 15:09:24.330: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8545-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" �[1mSTEP�[0m: kubectl explain works to explain CR properties recursively Jan 13 15:09:24.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 explain e2e-test-crd-publish-openapi-8545-crds.metadata' Jan 13 15:09:24.585: INFO: stderr: "" Jan 13 15:09:24.585: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8545-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 13 15:09:24.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 explain e2e-test-crd-publish-openapi-8545-crds.spec' Jan 13 15:09:24.824: INFO: stderr: "" Jan 13 15:09:24.824: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8545-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 13 15:09:24.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 explain e2e-test-crd-publish-openapi-8545-crds.spec.bars' Jan 13 15:09:25.041: INFO: stderr: "" Jan 13 15:09:25.041: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8545-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t<string> -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Jan 13 15:09:25.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-4357 explain e2e-test-crd-publish-openapi-8545-crds.spec.bars2' Jan 13 15:09:25.275: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:09:27.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4357" for this suite.
• ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":72,"skipped":1210,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:09:27.516: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test downward API volume plugin Jan 13 15:09:27.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5" in namespace "downward-api-6111" to be "Succeeded or Failed" Jan 13 15:09:27.558: INFO: Pod "downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.212823ms Jan 13 15:09:29.562: INFO: Pod "downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006754804s STEP: Saw pod success Jan 13 15:09:29.562: INFO: Pod "downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5" satisfied condition "Succeeded or Failed" Jan 13 15:09:29.564: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5 container client-container: <nil> STEP: delete the pod Jan 13 15:09:29.578: INFO: Waiting for pod downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5 to disappear Jan 13 15:09:29.581: INFO: Pod downwardapi-volume-2f40af60-246e-43f4-8714-550b996d24e5 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:09:29.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6111" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1214,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} [BeforeEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:09:29.590: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with secret that has name secret-emptykey-test-681de3f8-8bcd-4c79-bdcb-cc0a5ed41ef0 [AfterEach] [sig-api-machinery] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:09:29.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8384" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":74,"skipped":1214,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:09:29.671: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 15:09:30.200: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 13 15:09:32.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available",
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219370, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219370, loc:(*time.Location)(0x798e100)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219370, loc:(*time.Location)(0x798e100)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63809219370, loc:(*time.Location)(0x798e100)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6bd9446d55\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 13 15:09:35.227: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:09:45.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9883" for this suite. STEP: Destroying namespace "webhook-9883-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":75,"skipped":1245,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:09:17.125: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should succeed in writing subpaths in container [sig-storage][Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating the pod STEP: waiting for pod running STEP: creating a file in subpath Jan 13 15:09:19.239: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-1254 PodName:var-expansion-fe53a96d-bf80-4452-8ef5-d2ba0c583c43 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:09:19.239: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: test for file in mounted path Jan 13 15:09:19.332: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-1254 PodName:var-expansion-fe53a96d-bf80-4452-8ef5-d2ba0c583c43 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 13 15:09:19.332: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: updating the annotation value Jan 13 15:09:19.950: INFO: Successfully updated pod "var-expansion-fe53a96d-bf80-4452-8ef5-d2ba0c583c43" STEP: waiting for annotated pod running STEP: deleting the pod gracefully Jan 13 15:09:19.953: INFO: Deleting pod "var-expansion-fe53a96d-bf80-4452-8ef5-d2ba0c583c43" in namespace "var-expansion-1254" Jan 13 15:09:19.960: INFO: Wait up to 5m0s for pod "var-expansion-fe53a96d-bf80-4452-8ef5-d2ba0c583c43" to be fully deleted [AfterEach] [k8s.io] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:03.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1254" for this suite.
• ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][Slow] [Conformance]","total":-1,"completed":29,"skipped":445,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} ------------------------------ [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:09:45.453: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:09:45.489: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Pending, waiting for it to be Running (with Ready = true) Jan 13 15:09:47.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:49.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:51.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:53.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:55.494: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:57.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:09:59.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:10:01.494: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:10:03.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = false) Jan 13 15:10:05.493: INFO: The status of Pod test-webserver-cb179794-8832-4e62-aa4f-c60aa617d62b is Running (Ready = true) Jan 13 15:10:05.498: INFO: Container started at 2023-01-13 15:09:46 +0000 UTC, pod became ready at 2023-01-13 15:10:04 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:05.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-439" for this suite.
• ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":76,"skipped":1268,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:05.555: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating projection with configMap that has name projected-configmap-test-upd-88c04f84-322a-41a0-84ef-83dc66528904 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-88c04f84-322a-41a0-84ef-83dc66528904 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:09.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1809" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":77,"skipped":1293,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:04.035: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3821.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-3821.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3821.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 13 15:10:12.231: INFO: DNS probes using dns-3821/dns-test-3cd68d46-3e80-494a-90c5-3fd69b335df9 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:12.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3821" for this suite.
• ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":467,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:12.344: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: Creating a pod to test override all Jan 13 15:10:12.446: INFO: Waiting up to 5m0s for pod "client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79" in namespace "containers-1301" to be "Succeeded or Failed" Jan 13 15:10:12.450: INFO: Pod "client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.209778ms Jan 13 15:10:14.457: INFO: Pod "client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010779799s STEP: Saw pod success Jan 13 15:10:14.457: INFO: Pod "client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79" satisfied condition "Succeeded or Failed" Jan 13 15:10:14.461: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4w1i3t-md-0-xqwqf-676754bb9b-krr8s pod client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79 container agnhost-container: <nil> STEP: delete the pod Jan 13 15:10:14.485: INFO: Waiting for pod client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79 to disappear Jan 13 15:10:14.490: INFO: Pod client-containers-2834fc18-4f37-4803-816f-0b12cf6afd79 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:14.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1301" for this suite.
• ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":467,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:14.517: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: getting the auto-created API token STEP: reading a file in the container Jan 13 15:10:17.122: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-017dd993-09dd-4b22-87e8-8bbf4f1cfa66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 13 15:10:17.388: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-017dd993-09dd-4b22-87e8-8bbf4f1cfa66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 13 15:10:17.616: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4147 pod-service-account-017dd993-09dd-4b22-87e8-8bbf4f1cfa66 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:17.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4147" for this suite.
• ------------------------------ [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:09.729: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 Jan 13 15:10:09.762: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: creating replication controller svc-latency-rc in namespace svc-latency-7683 I0113 15:10:09.779357 16 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7683, replica count: 1 I0113 15:10:10.830051 16 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 13 15:10:11.028: INFO: Created: latency-svc-f7q2p Jan 13 15:10:11.077: INFO: Got endpoints: latency-svc-f7q2p [147.354344ms] Jan 13 15:10:11.181: INFO: Created: latency-svc-2qw67 Jan 13 15:10:11.198: INFO: Created: latency-svc-67h6k Jan 13 15:10:11.201: INFO: Got endpoints: latency-svc-2qw67 [122.383373ms] Jan 13 15:10:11.208: INFO: Got endpoints: latency-svc-67h6k [129.918487ms] Jan 13 15:10:11.221: INFO: Created: latency-svc-bkjx4 Jan 13 15:10:11.234: INFO: Got endpoints: latency-svc-bkjx4 [156.328441ms] Jan 13 15:10:11.253: INFO: Created: latency-svc-p5llz Jan 13 15:10:11.273: INFO: Created: latency-svc-rq9gd Jan 13 15:10:11.273: INFO: Got endpoints: latency-svc-p5llz [194.953898ms] Jan 13 15:10:11.302: INFO: Created: latency-svc-6txwn Jan 13 15:10:11.302: INFO: Got endpoints: latency-svc-rq9gd [223.802434ms] Jan 13 15:10:11.317: INFO: Got endpoints: latency-svc-6txwn [238.454362ms] Jan 13 15:10:11.337: INFO: Created: latency-svc-54dgm Jan 13 15:10:11.337: INFO: Got endpoints: latency-svc-54dgm [258.695104ms] Jan 13 15:10:11.355: INFO: Created: latency-svc-984d8 Jan 13 15:10:11.363: INFO: Got endpoints: latency-svc-984d8 [284.307986ms] Jan 13 15:10:11.368: INFO: Created: latency-svc-scstt Jan 13 15:10:11.386: INFO: Got endpoints: latency-svc-scstt [307.156483ms] Jan 13 15:10:11.391: INFO: Created: latency-svc-6rmdd Jan 13 15:10:11.398: INFO: Got endpoints: latency-svc-6rmdd [320.137028ms] Jan 13 15:10:11.406: INFO: Created: latency-svc-9frnx Jan 13 15:10:11.418: INFO: Got endpoints: latency-svc-9frnx [339.218812ms] Jan 13 15:10:11.424: INFO: Created: latency-svc-ngzlw Jan 13 15:10:11.431: INFO: Got endpoints: latency-svc-ngzlw [352.78382ms] Jan 13 15:10:11.434: INFO: Created: latency-svc-b22tn Jan 13 15:10:11.448: INFO: Got endpoints: latency-svc-b22tn [369.775992ms] Jan 13 15:10:11.461: INFO: Created: latency-svc-vptkc Jan 13 15:10:11.470: INFO: Got endpoints: latency-svc-vptkc [391.583704ms] Jan 13 15:10:11.481: INFO: Created: latency-svc-7bxtz Jan 13 15:10:11.496: INFO: Got endpoints: latency-svc-7bxtz [417.450766ms] Jan 13 15:10:11.502: INFO: Created: latency-svc-bg5lm Jan 13 15:10:11.504: INFO: Got endpoints: latency-svc-bg5lm [303.436839ms] Jan 13 15:10:11.513: INFO: Created: latency-svc-qvmdv Jan 13 15:10:11.529: INFO: Created: latency-svc-fhsq2 Jan 13 15:10:11.529: INFO: Got endpoints: latency-svc-qvmdv [320.5801ms] Jan 13 15:10:11.534: INFO: Got endpoints: latency-svc-fhsq2 [30.061427ms] Jan 13
15:10:11.542: INFO: Created: latency-svc-8rnxx Jan 13 15:10:11.554: INFO: Created: latency-svc-bxgt4 Jan 13 15:10:11.555: INFO: Got endpoints: latency-svc-8rnxx [320.473207ms] Jan 13 15:10:11.563: INFO: Got endpoints: latency-svc-bxgt4 [290.20544ms] Jan 13 15:10:11.565: INFO: Created: latency-svc-g89r4 Jan 13 15:10:11.575: INFO: Got endpoints: latency-svc-g89r4 [273.505292ms] Jan 13 15:10:11.585: INFO: Created: latency-svc-r8kb8 Jan 13 15:10:11.590: INFO: Got endpoints: latency-svc-r8kb8 [273.60209ms] Jan 13 15:10:11.604: INFO: Created: latency-svc-mr7hg Jan 13 15:10:11.614: INFO: Got endpoints: latency-svc-mr7hg [277.267929ms] Jan 13 15:10:11.619: INFO: Created: latency-svc-tqfgt Jan 13 15:10:11.630: INFO: Got endpoints: latency-svc-tqfgt [266.966336ms] Jan 13 15:10:11.631: INFO: Created: latency-svc-x8z5k Jan 13 15:10:11.636: INFO: Got endpoints: latency-svc-x8z5k [250.260586ms] Jan 13 15:10:11.647: INFO: Created: latency-svc-klw7l Jan 13 15:10:11.667: INFO: Got endpoints: latency-svc-klw7l [267.948014ms] Jan 13 15:10:11.670: INFO: Created: latency-svc-rx7sq Jan 13 15:10:11.675: INFO: Got endpoints: latency-svc-rx7sq [257.122549ms] Jan 13 15:10:11.683: INFO: Created: latency-svc-5hmdz Jan 13 15:10:11.692: INFO: Got endpoints: latency-svc-5hmdz [260.457428ms] Jan 13 15:10:11.694: INFO: Created: latency-svc-rmjnp Jan 13 15:10:11.704: INFO: Created: latency-svc-ltflk Jan 13 15:10:11.704: INFO: Got endpoints: latency-svc-rmjnp [255.572423ms] Jan 13 15:10:11.716: INFO: Got endpoints: latency-svc-ltflk [245.729333ms] Jan 13 15:10:11.719: INFO: Created: latency-svc-26q5m Jan 13 15:10:11.726: INFO: Got endpoints: latency-svc-26q5m [229.761639ms] Jan 13 15:10:11.735: INFO: Created: latency-svc-vhcr8 Jan 13 15:10:11.741: INFO: Got endpoints: latency-svc-vhcr8 [212.280396ms] Jan 13 15:10:11.750: INFO: Created: latency-svc-bwdh9 Jan 13 15:10:11.760: INFO: Created: latency-svc-6tdqg Jan 13 15:10:11.761: INFO: Got endpoints: latency-svc-bwdh9 [226.406914ms] Jan 13 15:10:11.770: INFO: Got endpoints: latency-svc-6tdqg [215.160689ms] Jan 13 15:10:11.774: INFO: Created: latency-svc-tghhm Jan 13 15:10:11.784: INFO: Got endpoints: latency-svc-tghhm [220.406853ms] Jan 13 15:10:11.788: INFO: Created: latency-svc-mpw8r Jan 13 15:10:11.797: INFO: Got endpoints: latency-svc-mpw8r [221.13242ms] Jan 13 15:10:11.799: INFO: Created: latency-svc-98hhd Jan 13 15:10:11.811: INFO: Created: latency-svc-t42rp Jan 13 15:10:11.814: INFO: Got endpoints: latency-svc-98hhd [223.707147ms] Jan 13 15:10:11.830: INFO: Got endpoints: latency-svc-t42rp [215.880905ms] Jan 13 15:10:11.838: INFO: Created: latency-svc-fmzqq Jan 13 15:10:11.865: INFO: Got endpoints: latency-svc-fmzqq [234.913203ms] Jan 13 15:10:11.872: INFO: Created: latency-svc-k7crt Jan 13 15:10:11.879: INFO: Got endpoints: latency-svc-k7crt [242.861398ms] Jan 13 15:10:11.902: INFO: Created: latency-svc-wj9gv Jan 13 15:10:11.911: INFO: Got endpoints: latency-svc-wj9gv [244.083834ms] Jan 13 15:10:11.928: INFO: Created: latency-svc-pd5mb Jan 13 15:10:11.941: INFO: Got endpoints: latency-svc-pd5mb [265.511976ms] Jan 13 15:10:11.944: INFO: Created: latency-svc-92m4q Jan 13 15:10:11.954: INFO: Got endpoints: latency-svc-92m4q [261.849851ms] Jan 13 15:10:11.959: INFO: Created: latency-svc-8zbnr Jan 13 15:10:11.967: INFO: Created: latency-svc-h4fxp Jan 13 15:10:11.967: INFO: Got endpoints: latency-svc-8zbnr [263.787536ms] Jan 13 15:10:11.974: INFO: Got endpoints: latency-svc-h4fxp [257.761213ms] Jan 13 15:10:11.985: INFO: Created: latency-svc-4f7kz Jan 13 15:10:11.994: INFO: 
Got endpoints: latency-svc-4f7kz [267.075549ms] Jan 13 15:10:11.996: INFO: Created: latency-svc-kpmv2 Jan 13 15:10:12.009: INFO: Created: latency-svc-jswxv Jan 13 15:10:12.019: INFO: Created: latency-svc-m2p45 Jan 13 15:10:12.031: INFO: Created: latency-svc-gggtz Jan 13 15:10:12.046: INFO: Created: latency-svc-fx47l Jan 13 15:10:12.064: INFO: Got endpoints: latency-svc-kpmv2 [323.208788ms] Jan 13 15:10:12.071: INFO: Created: latency-svc-r4mmt Jan 13 15:10:12.084: INFO: Created: latency-svc-pthjp Jan 13 15:10:12.089: INFO: Got endpoints: latency-svc-jswxv [327.832713ms] Jan 13 15:10:12.110: INFO: Created: latency-svc-s622q Jan 13 15:10:12.124: INFO: Created: latency-svc-8jhjz Jan 13 15:10:12.133: INFO: Created: latency-svc-6vxbr Jan 13 15:10:12.144: INFO: Created: latency-svc-l8cmb Jan 13 15:10:12.145: INFO: Got endpoints: latency-svc-m2p45 [375.063942ms] Jan 13 15:10:12.161: INFO: Created: latency-svc-4j4nx Jan 13 15:10:12.200: INFO: Got endpoints: latency-svc-gggtz [416.201321ms] Jan 13 15:10:12.204: INFO: Created: latency-svc-2spx8 Jan 13 15:10:12.216: INFO: Created: latency-svc-9t2ml Jan 13 15:10:12.229: INFO: Created: latency-svc-d74lh Jan 13 15:10:12.241: INFO: Created: latency-svc-pnpxs Jan 13 15:10:12.243: INFO: Got endpoints: latency-svc-fx47l [446.204454ms] Jan 13 15:10:12.264: INFO: Created: latency-svc-sxd89 Jan 13 15:10:12.277: INFO: Created: latency-svc-lbqwl Jan 13 15:10:12.295: INFO: Got endpoints: latency-svc-r4mmt [480.450733ms] Jan 13 15:10:12.315: INFO: Created: latency-svc-4knpt Jan 13 15:10:12.327: INFO: Created: latency-svc-lpgld Jan 13 15:10:12.339: INFO: Got endpoints: latency-svc-pthjp [508.92141ms] Jan 13 15:10:12.341: INFO: Created: latency-svc-tn98q Jan 13 15:10:12.371: INFO: Created: latency-svc-x8b8k Jan 13 15:10:12.389: INFO: Got endpoints: latency-svc-s622q [524.259686ms] Jan 13 15:10:12.409: INFO: Created: latency-svc-7qzz4 Jan 13 15:10:12.440: INFO: Got endpoints: latency-svc-8jhjz [560.848739ms] Jan 13 15:10:12.462: INFO: Created: latency-svc-6fnx5 Jan 13 15:10:12.490: INFO: Got endpoints: latency-svc-6vxbr [579.610642ms] Jan 13 15:10:12.512: INFO: Created: latency-svc-vf6ld Jan 13 15:10:12.538: INFO: Got endpoints: latency-svc-l8cmb [597.273807ms] Jan 13 15:10:12.555: INFO: Created: latency-svc-vkc7b Jan 13 15:10:12.588: INFO: Got endpoints: latency-svc-4j4nx [634.594512ms] Jan 13 15:10:12.603: INFO: Created: latency-svc-lrcd6 Jan 13 15:10:12.642: INFO: Got endpoints: latency-svc-2spx8 [674.487494ms] Jan 13 15:10:12.694: INFO: Created: latency-svc-55s85 Jan 13 15:10:12.701: INFO: Got endpoints: latency-svc-9t2ml [726.509822ms] Jan 13 15:10:12.737: INFO: Created: latency-svc-qwtzs Jan 13 15:10:12.755: INFO: Got endpoints: latency-svc-d74lh [761.401053ms] Jan 13 15:10:12.783: INFO: Created: latency-svc-wrzqr Jan 13 15:10:12.796: INFO: Got endpoints: latency-svc-pnpxs [731.527025ms] Jan 13 15:10:12.835: INFO: Created: latency-svc-bkjpb Jan 13 15:10:12.891: INFO: Got endpoints: latency-svc-sxd89 [802.193031ms] Jan 13 15:10:12.913: INFO: Created: latency-svc-246fs Jan 13 15:10:12.939: INFO: Got endpoints: latency-svc-lbqwl [793.849194ms] Jan 13 15:10:12.962: INFO: Created: latency-svc-p779t Jan 13 15:10:12.993: INFO: Got endpoints: latency-svc-4knpt [792.926806ms] Jan 13 15:10:13.012: INFO: Created: latency-svc-q7pmv Jan 13 15:10:13.038: INFO: Got endpoints: latency-svc-lpgld [795.482158ms] Jan 13 15:10:13.054: INFO: Created: latency-svc-mp4ng Jan 13 15:10:13.092: INFO: Got endpoints: latency-svc-tn98q [796.861338ms] Jan 13 15:10:13.107: INFO: Created: 
latency-svc-whtpm Jan 13 15:10:13.138: INFO: Got endpoints: latency-svc-x8b8k [798.66117ms] Jan 13 15:10:13.153: INFO: Created: latency-svc-h4jv6 Jan 13 15:10:13.191: INFO: Got endpoints: latency-svc-7qzz4 [802.12085ms] Jan 13 15:10:13.220: INFO: Created: latency-svc-btbxg Jan 13 15:10:13.240: INFO: Got endpoints: latency-svc-6fnx5 [799.579964ms] Jan 13 15:10:13.266: INFO: Created: latency-svc-jg9tv Jan 13 15:10:13.288: INFO: Got endpoints: latency-svc-vf6ld [797.916475ms] Jan 13 15:10:13.308: INFO: Created: latency-svc-bf574 Jan 13 15:10:13.342: INFO: Got endpoints: latency-svc-vkc7b [803.579768ms] Jan 13 15:10:13.360: INFO: Created: latency-svc-tj2cd Jan 13 15:10:13.389: INFO: Got endpoints: latency-svc-lrcd6 [800.526385ms] Jan 13 15:10:13.407: INFO: Created: latency-svc-crprk Jan 13 15:10:13.437: INFO: Got endpoints: latency-svc-55s85 [794.953036ms] Jan 13 15:10:13.456: INFO: Created: latency-svc-m5zxd Jan 13 15:10:13.488: INFO: Got endpoints: latency-svc-qwtzs [787.47766ms] Jan 13 15:10:13.516: INFO: Created: latency-svc-ttkmv Jan 13 15:10:13.537: INFO: Got endpoints: latency-svc-wrzqr [782.031876ms] Jan 13 15:10:13.553: INFO: Created: latency-svc-zflwn Jan 13 15:10:13.588: INFO: Got endpoints: latency-svc-bkjpb [791.648093ms] Jan 13 15:10:13.600: INFO: Created: latency-svc-z5x2z Jan 13 15:10:13.638: INFO: Got endpoints: latency-svc-246fs [746.903118ms] Jan 13 15:10:13.653: INFO: Created: latency-svc-25kls Jan 13 15:10:13.689: INFO: Got endpoints: latency-svc-p779t [750.193886ms] Jan 13 15:10:13.708: INFO: Created: latency-svc-f59ls Jan 13 15:10:13.739: INFO: Got endpoints: latency-svc-q7pmv [745.634962ms] Jan 13 15:10:13.752: INFO: Created: latency-svc-295gd Jan 13 15:10:13.788: INFO: Got endpoints: latency-svc-mp4ng [749.663471ms] Jan 13 15:10:13.802: INFO: Created: latency-svc-r2lgj Jan 13 15:10:13.839: INFO: Got endpoints: latency-svc-whtpm [747.166297ms] Jan 13 15:10:13.855: INFO: Created: latency-svc-6cfxj Jan 13 15:10:13.889: INFO: Got endpoints: latency-svc-h4jv6 [750.782768ms] Jan 13 15:10:13.906: INFO: Created: latency-svc-b9px5 Jan 13 15:10:13.937: INFO: Got endpoints: latency-svc-btbxg [745.428997ms] Jan 13 15:10:13.949: INFO: Created: latency-svc-cstqn Jan 13 15:10:13.988: INFO: Got endpoints: latency-svc-jg9tv [747.910394ms] Jan 13 15:10:14.000: INFO: Created: latency-svc-w8sh5 Jan 13 15:10:14.045: INFO: Got endpoints: latency-svc-bf574 [756.151259ms] Jan 13 15:10:14.067: INFO: Created: latency-svc-zwh58 Jan 13 15:10:14.090: INFO: Got endpoints: latency-svc-tj2cd [747.735492ms] Jan 13 15:10:14.119: INFO: Created: latency-svc-wh65n Jan 13 15:10:14.137: INFO: Got endpoints: latency-svc-crprk [746.807974ms] Jan 13 15:10:14.159: INFO: Created: latency-svc-qtzqt Jan 13 15:10:14.192: INFO: Got endpoints: latency-svc-m5zxd [754.274531ms] Jan 13 15:10:14.207: INFO: Created: latency-svc-g7xlb Jan 13 15:10:14.238: INFO: Got endpoints: latency-svc-ttkmv [749.553561ms] Jan 13 15:10:14.261: INFO: Created: latency-svc-r7cmj Jan 13 15:10:14.287: INFO: Got endpoints: latency-svc-zflwn [750.110087ms] Jan 13 15:10:14.304: INFO: Created: latency-svc-82lxb Jan 13 15:10:14.337: INFO: Got endpoints: latency-svc-z5x2z [749.380005ms] Jan 13 15:10:14.363: INFO: Created: latency-svc-6r977 Jan 13 15:10:14.388: INFO: Got endpoints: latency-svc-25kls [750.432106ms] Jan 13 15:10:14.402: INFO: Created: latency-svc-5d9bq Jan 13 15:10:14.437: INFO: Got endpoints: latency-svc-f59ls [748.130448ms] Jan 13 15:10:14.458: INFO: Created: latency-svc-qdcvk Jan 13 15:10:14.488: INFO: Got endpoints: 
latency-svc-295gd [749.064133ms] Jan 13 15:10:14.505: INFO: Created: latency-svc-tvrnx Jan 13 15:10:14.537: INFO: Got endpoints: latency-svc-r2lgj [748.69019ms] Jan 13 15:10:14.555: INFO: Created: latency-svc-gdcnn Jan 13 15:10:14.588: INFO: Got endpoints: latency-svc-6cfxj [749.403386ms] Jan 13 15:10:14.609: INFO: Created: latency-svc-l2b7r Jan 13 15:10:14.638: INFO: Got endpoints: latency-svc-b9px5 [748.468448ms] Jan 13 15:10:14.658: INFO: Created: latency-svc-pjgk5 Jan 13 15:10:14.695: INFO: Got endpoints: latency-svc-cstqn [758.022823ms] Jan 13 15:10:14.727: INFO: Created: latency-svc-kjbh7 Jan 13 15:10:14.737: INFO: Got endpoints: latency-svc-w8sh5 [749.562861ms] Jan 13 15:10:14.767: INFO: Created: latency-svc-l6tl6 Jan 13 15:10:14.789: INFO: Got endpoints: latency-svc-zwh58 [743.298767ms] Jan 13 15:10:14.820: INFO: Created: latency-svc-spgq5 Jan 13 15:10:14.838: INFO: Got endpoints: latency-svc-wh65n [748.365254ms] Jan 13 15:10:14.860: INFO: Created: latency-svc-lhml4 Jan 13 15:10:14.888: INFO: Got endpoints: latency-svc-qtzqt [750.335086ms] Jan 13 15:10:14.912: INFO: Created: latency-svc-kttm9 Jan 13 15:10:14.941: INFO: Got endpoints: latency-svc-g7xlb [748.90777ms] Jan 13 15:10:14.971: INFO: Created: latency-svc-7h54q Jan 13 15:10:14.989: INFO: Got endpoints: latency-svc-r7cmj [751.256227ms] Jan 13 15:10:15.026: INFO: Created: latency-svc-kv2jp Jan 13 15:10:15.045: INFO: Got endpoints: latency-svc-82lxb [757.131241ms] Jan 13 15:10:15.074: INFO: Created: latency-svc-8qhzt Jan 13 15:10:15.095: INFO: Got endpoints: latency-svc-6r977 [757.568226ms] Jan 13 15:10:15.131: INFO: Created: latency-svc-q8g2v Jan 13 15:10:15.143: INFO: Got endpoints: latency-svc-5d9bq [754.259332ms] Jan 13 15:10:15.169: INFO: Created: latency-svc-q76wj Jan 13 15:10:15.191: INFO: Got endpoints: latency-svc-qdcvk [753.492364ms] Jan 13 15:10:15.208: INFO: Created: latency-svc-qjl55 Jan 13 15:10:15.238: INFO: Got endpoints: latency-svc-tvrnx [749.787062ms] Jan 13 15:10:15.255: INFO: Created: latency-svc-s68fc Jan 13 15:10:15.287: INFO: Got endpoints: latency-svc-gdcnn [750.104811ms] Jan 13 15:10:15.299: INFO: Created: latency-svc-glf8t Jan 13 15:10:15.342: INFO: Got endpoints: latency-svc-l2b7r [753.559824ms] Jan 13 15:10:15.357: INFO: Created: latency-svc-q4fsc Jan 13 15:10:15.387: INFO: Got endpoints: latency-svc-pjgk5 [749.250072ms] Jan 13 15:10:15.401: INFO: Created: latency-svc-h9rrk Jan 13 15:10:15.438: INFO: Got endpoints: latency-svc-kjbh7 [742.867596ms] Jan 13 15:10:15.458: INFO: Created: latency-svc-fbn9l Jan 13 15:10:15.494: INFO: Got endpoints: latency-svc-l6tl6 [756.538489ms] Jan 13 15:10:15.513: INFO: Created: latency-svc-vxnl5 Jan 13 15:10:15.541: INFO: Got endpoints: latency-svc-spgq5 [752.003517ms] Jan 13 15:10:15.559: INFO: Created: latency-svc-cxtv7 Jan 13 15:10:15.588: INFO: Got endpoints: latency-svc-lhml4 [749.06589ms] Jan 13 15:10:15.602: INFO: Created: latency-svc-4zrmf Jan 13 15:10:15.638: INFO: Got endpoints: latency-svc-kttm9 [749.694514ms] Jan 13 15:10:15.651: INFO: Created: latency-svc-6n8v9 Jan 13 15:10:15.687: INFO: Got endpoints: latency-svc-7h54q [746.402012ms] Jan 13 15:10:15.699: INFO: Created: latency-svc-vqk2z Jan 13 15:10:15.738: INFO: Got endpoints: latency-svc-kv2jp [748.519896ms] Jan 13 15:10:15.751: INFO: Created: latency-svc-l5g6h Jan 13 15:10:15.787: INFO: Got endpoints: latency-svc-8qhzt [742.489491ms] Jan 13 15:10:15.804: INFO: Created: latency-svc-cwtx5 Jan 13 15:10:15.838: INFO: Got endpoints: latency-svc-q8g2v [742.571445ms] Jan 13 15:10:15.851: INFO: Created: 
latency-svc-sxdfz Jan 13 15:10:15.887: INFO: Got endpoints: latency-svc-q76wj [744.526473ms] Jan 13 15:10:15.906: INFO: Created: latency-svc-5pq86 Jan 13 15:10:15.938: INFO: Got endpoints: latency-svc-qjl55 [747.311748ms] Jan 13 15:10:15.956: INFO: Created: latency-svc-r484p Jan 13 15:10:15.990: INFO: Got endpoints: latency-svc-s68fc [751.985879ms] Jan 13 15:10:16.014: INFO: Created: latency-svc-78hd2 Jan 13 15:10:16.038: INFO: Got endpoints: latency-svc-glf8t [751.218762ms] Jan 13 15:10:16.054: INFO: Created: latency-svc-n4rwk Jan 13 15:10:16.089: INFO: Got endpoints: latency-svc-q4fsc [746.89723ms] Jan 13 15:10:16.118: INFO: Created: latency-svc-29j9d Jan 13 15:10:16.139: INFO: Got endpoints: latency-svc-h9rrk [751.10756ms] Jan 13 15:10:16.162: INFO: Created: latency-svc-mnxss Jan 13 15:10:16.191: INFO: Got endpoints: latency-svc-fbn9l [752.537317ms] Jan 13 15:10:16.215: INFO: Created: latency-svc-dq8pp Jan 13 15:10:16.239: INFO: Got endpoints: latency-svc-vxnl5 [744.009392ms] Jan 13 15:10:16.253: INFO: Created: latency-svc-6mklc Jan 13 15:10:16.287: INFO: Got endpoints: latency-svc-cxtv7 [746.0131ms] Jan 13 15:10:16.298: INFO: Created: latency-svc-hvnzb Jan 13 15:10:16.337: INFO: Got endpoints: latency-svc-4zrmf [749.081578ms] Jan 13 15:10:16.349: INFO: Created: latency-svc-ck2q7 Jan 13 15:10:16.387: INFO: Got endpoints: latency-svc-6n8v9 [748.847879ms] Jan 13 15:10:16.401: INFO: Created: latency-svc-62cpr Jan 13 15:10:16.438: INFO: Got endpoints: latency-svc-vqk2z [751.104073ms] Jan 13 15:10:16.453: INFO: Created: latency-svc-xx5d4 Jan 13 15:10:16.488: INFO: Got endpoints: latency-svc-l5g6h [749.504398ms] Jan 13 15:10:16.501: INFO: Created: latency-svc-jq59r Jan 13 15:10:16.537: INFO: Got endpoints: latency-svc-cwtx5 [750.098297ms] Jan 13 15:10:16.553: INFO: Created: latency-svc-fhcsg Jan 13 15:10:16.587: INFO: Got endpoints: latency-svc-sxdfz [749.815928ms] Jan 13 15:10:16.601: INFO: Created: latency-svc-tdjd7 Jan 13 15:10:16.640: INFO: Got endpoints: latency-svc-5pq86 [752.193543ms] Jan 13 15:10:16.653: INFO: Created: latency-svc-s9w6s Jan 13 15:10:16.687: INFO: Got endpoints: latency-svc-r484p [748.529418ms] Jan 13 15:10:16.698: INFO: Created: latency-svc-9rvxv Jan 13 15:10:16.738: INFO: Got endpoints: latency-svc-78hd2 [747.91325ms] Jan 13 15:10:16.754: INFO: Created: latency-svc-8bq6g Jan 13 15:10:16.788: INFO: Got endpoints: latency-svc-n4rwk [749.378885ms] Jan 13 15:10:16.799: INFO: Created: latency-svc-wtdfx Jan 13 15:10:16.838: INFO: Got endpoints: latency-svc-29j9d [748.145457ms] Jan 13 15:10:16.855: INFO: Created: latency-svc-d66mb Jan 13 15:10:16.887: INFO: Got endpoints: latency-svc-mnxss [748.76307ms] Jan 13 15:10:16.903: INFO: Created: latency-svc-bvxss Jan 13 15:10:16.940: INFO: Got endpoints: latency-svc-dq8pp [749.602498ms] Jan 13 15:10:16.954: INFO: Created: latency-svc-g57zk Jan 13 15:10:16.988: INFO: Got endpoints: latency-svc-6mklc [748.983943ms] Jan 13 15:10:17.001: INFO: Created: latency-svc-qzvxg Jan 13 15:10:17.037: INFO: Got endpoints: latency-svc-hvnzb [749.792151ms] Jan 13 15:10:17.048: INFO: Created: latency-svc-kfqwm Jan 13 15:10:17.091: INFO: Got endpoints: latency-svc-ck2q7 [753.81049ms] Jan 13 15:10:17.110: INFO: Created: latency-svc-cdfsp Jan 13 15:10:17.139: INFO: Got endpoints: latency-svc-62cpr [750.88331ms] Jan 13 15:10:17.155: INFO: Created: latency-svc-s5shh Jan 13 15:10:17.190: INFO: Got endpoints: latency-svc-xx5d4 [751.890697ms] Jan 13 15:10:17.215: INFO: Created: latency-svc-cs2mv Jan 13 15:10:17.238: INFO: Got endpoints: latency-svc-jq59r 
[749.853619ms] Jan 13 15:10:17.254: INFO: Created: latency-svc-hnhgw Jan 13 15:10:17.286: INFO: Got endpoints: latency-svc-fhcsg [748.756276ms] Jan 13 15:10:17.299: INFO: Created: latency-svc-5w6xq Jan 13 15:10:17.337: INFO: Got endpoints: latency-svc-tdjd7 [749.370484ms] Jan 13 15:10:17.348: INFO: Created: latency-svc-rhkc4 Jan 13 15:10:17.393: INFO: Got endpoints: latency-svc-s9w6s [752.835194ms] Jan 13 15:10:17.406: INFO: Created: latency-svc-hjf9f Jan 13 15:10:17.437: INFO: Got endpoints: latency-svc-9rvxv [749.528164ms] Jan 13 15:10:17.452: INFO: Created: latency-svc-ctscl Jan 13 15:10:17.489: INFO: Got endpoints: latency-svc-8bq6g [750.61854ms] Jan 13 15:10:17.510: INFO: Created: latency-svc-bsl9q Jan 13 15:10:17.538: INFO: Got endpoints: latency-svc-wtdfx [750.132186ms] Jan 13 15:10:17.552: INFO: Created: latency-svc-qsrb9 Jan 13 15:10:17.588: INFO: Got endpoints: latency-svc-d66mb [749.953302ms] Jan 13 15:10:17.600: INFO: Created: latency-svc-bhr62 Jan 13 15:10:17.637: INFO: Got endpoints: latency-svc-bvxss [749.971843ms] Jan 13 15:10:17.657: INFO: Created: latency-svc-ktvkz Jan 13 15:10:17.687: INFO: Got endpoints: latency-svc-g57zk [746.832894ms] Jan 13 15:10:17.701: INFO: Created: latency-svc-d2p5h Jan 13 15:10:17.755: INFO: Got endpoints: latency-svc-qzvxg [766.682949ms] Jan 13 15:10:17.773: INFO: Created: latency-svc-fvphn Jan 13 15:10:17.789: INFO: Got endpoints: latency-svc-kfqwm [752.744748ms] Jan 13 15:10:17.809: INFO: Created: latency-svc-rqrn9 Jan 13 15:10:17.839: INFO: Got endpoints: latency-svc-cdfsp [748.007385ms] Jan 13 15:10:17.863: INFO: Created: latency-svc-gxsfm Jan 13 15:10:17.891: INFO: Got endpoints: latency-svc-s5shh [752.535291ms] Jan 13 15:10:17.906: INFO: Created: latency-svc-q757k Jan 13 15:10:17.939: INFO: Got endpoints: latency-svc-cs2mv [748.810679ms] Jan 13 15:10:17.956: INFO: Created: latency-svc-g5glj Jan 13 15:10:17.989: INFO: Got endpoints: latency-svc-hnhgw [750.880563ms] Jan 13 15:10:18.007: INFO: Created: latency-svc-s74cz Jan 13 15:10:18.038: INFO: Got endpoints: latency-svc-5w6xq [751.999772ms] Jan 13 15:10:18.056: INFO: Created: latency-svc-hw524 Jan 13 15:10:18.093: INFO: Got endpoints: latency-svc-rhkc4 [756.309755ms] Jan 13 15:10:18.128: INFO: Created: latency-svc-5djbv Jan 13 15:10:18.140: INFO: Got endpoints: latency-svc-hjf9f [746.893826ms] Jan 13 15:10:18.180: INFO: Created: latency-svc-5q76h Jan 13 15:10:18.236: INFO: Got endpoints: latency-svc-ctscl [799.093235ms] Jan 13 15:10:18.284: INFO: Got endpoints: latency-svc-bsl9q [795.527224ms] Jan 13 15:10:18.320: INFO: Got endpoints: latency-svc-qsrb9 [782.100187ms] Jan 13 15:10:18.322: INFO: Created: latency-svc-bfkmj Jan 13 15:10:18.362: INFO: Created: latency-svc-gjgh5 Jan 13 15:10:18.362: INFO: Got endpoints: latency-svc-bhr62 [774.478849ms] Jan 13 15:10:18.414: INFO: Got endpoints: latency-svc-ktvkz [776.347308ms] Jan 13 15:10:18.416: INFO: Created: latency-svc-sm27v Jan 13 15:10:18.444: INFO: Got endpoints: latency-svc-d2p5h [756.803729ms] Jan 13 15:10:18.445: INFO: Created: latency-svc-p6dfh Jan 13 15:10:18.454: INFO: Created: latency-svc-c9n49 Jan 13 15:10:18.465: INFO: Created: latency-svc-j7hrm Jan 13 15:10:18.488: INFO: Got endpoints: latency-svc-fvphn [733.075786ms] Jan 13 15:10:18.500: INFO: Created: latency-svc-c9krw Jan 13 15:10:18.537: INFO: Got endpoints: latency-svc-rqrn9 [747.673296ms] Jan 13 15:10:18.555: INFO: Created: latency-svc-czsvs Jan 13 15:10:18.588: INFO: Got endpoints: latency-svc-gxsfm [748.567989ms] Jan 13 15:10:18.600: INFO: Created: latency-svc-mmcsg Jan 
13 15:10:18.638: INFO: Got endpoints: latency-svc-q757k [746.256402ms] Jan 13 15:10:18.654: INFO: Created: latency-svc-czscw Jan 13 15:10:18.688: INFO: Got endpoints: latency-svc-g5glj [748.175229ms] Jan 13 15:10:18.705: INFO: Created: latency-svc-tpdtf Jan 13 15:10:18.737: INFO: Got endpoints: latency-svc-s74cz [747.427375ms] Jan 13 15:10:18.756: INFO: Created: latency-svc-fqngh Jan 13 15:10:18.789: INFO: Got endpoints: latency-svc-hw524 [750.222357ms] Jan 13 15:10:18.819: INFO: Created: latency-svc-4smsp Jan 13 15:10:18.850: INFO: Got endpoints: latency-svc-5djbv [756.289441ms] Jan 13 15:10:18.871: INFO: Created: latency-svc-h99rx Jan 13 15:10:18.893: INFO: Got endpoints: latency-svc-5q76h [753.505054ms] Jan 13 15:10:18.909: INFO: Created: latency-svc-rl8d6 Jan 13 15:10:18.938: INFO: Got endpoints: latency-svc-bfkmj [702.531639ms] Jan 13 15:10:18.974: INFO: Created: latency-svc-9nblt Jan 13 15:10:18.990: INFO: Got endpoints: latency-svc-gjgh5 [705.788399ms] Jan 13 15:10:19.004: INFO: Created: latency-svc-d7vdw Jan 13 15:10:19.039: INFO: Got endpoints: latency-svc-sm27v [718.341629ms] Jan 13 15:10:19.093: INFO: Got endpoints: latency-svc-p6dfh [730.503181ms] Jan 13 15:10:19.139: INFO: Got endpoints: latency-svc-c9n49 [724.641193ms] Jan 13 15:10:19.189: INFO: Got endpoints: latency-svc-j7hrm [744.132717ms] Jan 13 15:10:19.239: INFO: Got endpoints: latency-svc-c9krw [751.076971ms] Jan 13 15:10:19.288: INFO: Got endpoints: latency-svc-czsvs [750.752397ms] Jan 13 15:10:19.338: INFO: Got endpoints: latency-svc-mmcsg [749.871141ms] Jan 13 15:10:19.388: INFO: Got endpoints: latency-svc-czscw [749.870094ms] Jan 13 15:10:19.438: INFO: Got endpoints: latency-svc-tpdtf [750.554467ms] Jan 13 15:10:19.488: INFO: Got endpoints: latency-svc-fqngh [750.745608ms] Jan 13 15:10:19.538: INFO: Got endpoints: latency-svc-4smsp [749.199171ms] Jan 13 15:10:19.587: INFO: Got endpoints: latency-svc-h99rx [736.897604ms] Jan 13 15:10:19.636: INFO: Got endpoints: latency-svc-rl8d6 [743.03893ms] Jan 13 15:10:19.687: INFO: Got endpoints: latency-svc-9nblt [748.888489ms] Jan 13 15:10:19.738: INFO: Got endpoints: latency-svc-d7vdw [747.585837ms] Jan 13 15:10:19.738: INFO: Latencies: [30.061427ms 122.383373ms 129.918487ms 156.328441ms 194.953898ms 212.280396ms 215.160689ms 215.880905ms 220.406853ms 221.13242ms 223.707147ms 223.802434ms 226.406914ms 229.761639ms 234.913203ms 238.454362ms 242.861398ms 244.083834ms 245.729333ms 250.260586ms 255.572423ms 257.122549ms 257.761213ms 258.695104ms 260.457428ms 261.849851ms 263.787536ms 265.511976ms 266.966336ms 267.075549ms 267.948014ms 273.505292ms 273.60209ms 277.267929ms 284.307986ms 290.20544ms 303.436839ms 307.156483ms 320.137028ms 320.473207ms 320.5801ms 323.208788ms 327.832713ms 339.218812ms 352.78382ms 369.775992ms 375.063942ms 391.583704ms 416.201321ms 417.450766ms 446.204454ms 480.450733ms 508.92141ms 524.259686ms 560.848739ms 579.610642ms 597.273807ms 634.594512ms 674.487494ms 702.531639ms 705.788399ms 718.341629ms 724.641193ms 726.509822ms 730.503181ms 731.527025ms 733.075786ms 736.897604ms 742.489491ms 742.571445ms 742.867596ms 743.03893ms 743.298767ms 744.009392ms 744.132717ms 744.526473ms 745.428997ms 745.634962ms 746.0131ms 746.256402ms 746.402012ms 746.807974ms 746.832894ms 746.893826ms 746.89723ms 746.903118ms 747.166297ms 747.311748ms 747.427375ms 747.585837ms 747.673296ms 747.735492ms 747.910394ms 747.91325ms 748.007385ms 748.130448ms 748.145457ms 748.175229ms 748.365254ms 748.468448ms 748.519896ms 748.529418ms 748.567989ms 748.69019ms 748.756276ms 748.76307ms 
748.810679ms 748.847879ms 748.888489ms 748.90777ms 748.983943ms 749.064133ms 749.06589ms 749.081578ms 749.199171ms 749.250072ms 749.370484ms 749.378885ms 749.380005ms 749.403386ms 749.504398ms 749.528164ms 749.553561ms 749.562861ms 749.602498ms 749.663471ms 749.694514ms 749.787062ms 749.792151ms 749.815928ms 749.853619ms 749.870094ms 749.871141ms 749.953302ms 749.971843ms 750.098297ms 750.104811ms 750.110087ms 750.132186ms 750.193886ms 750.222357ms 750.335086ms 750.432106ms 750.554467ms 750.61854ms 750.745608ms 750.752397ms 750.782768ms 750.880563ms 750.88331ms 751.076971ms 751.104073ms 751.10756ms 751.218762ms 751.256227ms 751.890697ms 751.985879ms 751.999772ms 752.003517ms 752.193543ms 752.535291ms 752.537317ms 752.744748ms 752.835194ms 753.492364ms 753.505054ms 753.559824ms 753.81049ms 754.259332ms 754.274531ms 756.151259ms 756.289441ms 756.309755ms 756.538489ms 756.803729ms 757.131241ms 757.568226ms 758.022823ms 761.401053ms 766.682949ms 774.478849ms 776.347308ms 782.031876ms 782.100187ms 787.47766ms 791.648093ms 792.926806ms 793.849194ms 794.953036ms 795.482158ms 795.527224ms 796.861338ms 797.916475ms 798.66117ms 799.093235ms 799.579964ms 800.526385ms 802.12085ms 802.193031ms 803.579768ms] Jan 13 15:10:19.738: INFO: 50 %ile: 748.519896ms Jan 13 15:10:19.738: INFO: 90 %ile: 774.478849ms Jan 13 15:10:19.738: INFO: 99 %ile: 802.193031ms Jan 13 15:10:19.738: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:19.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7683" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":32,"skipped":471,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:17.890: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:247 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:629 STEP: creating Agnhost RC Jan 13 15:10:17.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4202 create -f -' Jan 13 15:10:19.067: INFO: stderr: "" Jan 13 15:10:19.069: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 13 15:10:20.074: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 15:10:20.074: INFO: Found 1 / 1 Jan 13 15:10:20.075: INFO: WaitFor completed with timeout 5m0s. 
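The 50/90/99 %ile figures reported in the latency summary above are plain order statistics taken over the 200 sorted samples in the Latencies list. A minimal, standalone Go sketch of that calculation follows; it is not the e2e framework's own implementation, and the helper name and rounding convention are assumptions made purely for illustration.

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the value at percentile p (0-100) of an already-sorted
// slice, using a simple "index = N*p/100" convention. The real framework may
// round differently, so treat the exact boundary handling as an assumption.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A handful of values from the log stand in for the full 200-sample set.
	samples := []time.Duration{
		30061427 * time.Nanosecond,  // 30.061427ms
		748519896 * time.Nanosecond, // 748.519896ms
		774478849 * time.Nanosecond, // 774.478849ms
		802193031 * time.Nanosecond, // 802.193031ms
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}

Run against the full sample list, this produces the same kind of summary the test prints above (50 %ile around 748.5ms, 99 %ile around 802.2ms); the absolute values simply reflect the service endpoint propagation latency that the spec measures.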
Pods found = 1 out of 1 STEP: patching all pods Jan 13 15:10:20.080: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 15:10:20.080: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 13 15:10:20.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4202 patch pod agnhost-primary-s9n9m -p {"metadata":{"annotations":{"x":"y"}}}' Jan 13 15:10:20.199: INFO: stderr: "" Jan 13 15:10:20.199: INFO: stdout: "pod/agnhost-primary-s9n9m patched\n" STEP: checking annotations Jan 13 15:10:20.204: INFO: Selector matched 1 pods for map[app:agnhost] Jan 13 15:10:20.204: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 13 15:10:20.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4202" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":33,"skipped":471,"failed":4,"failures":["[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] DNS should provide DNS for ExternalName services [Conformance]","[sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]"]} SSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 13 15:10:20.227: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 13 15:10:21.024: INFO: deployment "sample-webhook-deployment" do