Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 1h35m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000cbbf20>: {
        error: <*errors.withMessage | 0xc0024a02c0>{
            cause: <*errors.errorString | 0xc001a9adb0>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1ad2fea, 0x1b134a8, 0x73c2fa, 0x73bcc5, 0x73b3bb, 0x741149, 0x740b27, 0x761fe5, 0x761d05, 0x761545, 0x7637f2, 0x76f9a5, 0x76f7be, 0x1b2de51, 0x5156c2, 0x46b2c1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
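The nested dump above has the shape produced when a standard-library error is wrapped once with github.com/pkg/errors: the cause is a plain *errors.errorString, and Wrap adds the withMessage and withStack layers that Ginkgo then prints. A minimal sketch that reproduces the same structure (runConformanceContainer is a hypothetical stand-in, not the actual framework call):

```go
package main

import (
	"errors"
	"fmt"

	pkgerrors "github.com/pkg/errors"
)

// runConformanceContainer is a hypothetical stand-in for the call that runs
// the kubetest container; it fails the same way the log reports.
func runConformanceContainer() error {
	// stdlib errors.New yields the *errors.errorString seen as the cause above
	return errors.New("error container run failed with exit code 1")
}

func main() {
	if err := runConformanceContainer(); err != nil {
		// pkgerrors.Wrap nests the cause in a withMessage (the msg field) and
		// then in a withStack (the stack slice), matching the failure output.
		wrapped := pkgerrors.Wrap(err, "Unable to run conformance tests")
		fmt.Println(wrapped) // Unable to run conformance tests: error container run failed with exit code 1
	}
}
```

In other words, the root cause reported by the spec is simply that the kubetest container exited with code 1; the interesting details are in the kubetest output further down.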
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-aww79v
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-aww79v"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-4exvhp" using the "upgrades" template (Kubernetes v1.22.8, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-4exvhp --infrastructure (default) --kubernetes-version v1.22.8 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
configmap/cni-k8s-upgrade-and-conformance-4exvhp-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-mp-0-config created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-dmp-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-4exvhp-md-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Kubernetes control-plane
INFO: Patching the new kubernetes version to KCP
INFO: Waiting for control-plane machines to have the upgraded kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.5
INFO: Waiting for kube-proxy to have the upgraded kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: Upgrading the machine deployment
INFO: Patching the new kubernetes version to Machine Deployment k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-md-0
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-md-0 to be upgraded from v1.22.8 to v1.23.5
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.5
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-aww79v/k8s-upgrade-and-conformance-4exvhp-mp-0 to be upgraded from v1.22.8 to v1.23.5
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.5
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650202571 - Will randomize all specs
Will run 7042 specs
Running in parallel across 4 nodes
Apr 17 13:36:14.587: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:36:14.589: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 17 13:36:14.599: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 17 13:36:14.632: INFO: The status of Pod coredns-64897985d-jbxzl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: The status of Pod coredns-64897985d-svmtf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: The status of Pod kindnet-kzh2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: The status of Pod kindnet-mrwqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: The status of Pod kube-proxy-zrgbs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: The status of Pod kube-proxy-zzq6m is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Apr 17 13:36:14.632: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 17 13:36:14.632: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Apr 17 13:36:14.632: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:14.632: INFO: coredns-64897985d-jbxzl k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:14.632: INFO: coredns-64897985d-svmtf k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:14.632: INFO: kindnet-kzh2l k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:09 +0000 UTC }] Apr 17 13:36:14.632: INFO: kindnet-mrwqq k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:01 +0000 UTC }] Apr 17 13:36:14.632: INFO: kube-proxy-zrgbs k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC }] Apr 17 13:36:14.632: INFO: kube-proxy-zzq6m k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC }] Apr 17 13:36:14.633: INFO: Apr 17 13:36:16.658: INFO: The status of Pod coredns-64897985d-jbxzl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:16.658: INFO: The status of Pod coredns-64897985d-svmtf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:16.658: INFO: The status of Pod kindnet-kzh2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:16.658: INFO: The status of Pod kindnet-mrwqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:16.658: INFO: The status of Pod kube-proxy-zrgbs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:16.658: INFO: The status of Pod kube-proxy-zzq6m is Running (Ready = false), waiting for it to be either Running (with 
Ready = true) or Failed Apr 17 13:36:16.658: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (2 seconds elapsed) Apr 17 13:36:16.658: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 17 13:36:16.658: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:16.658: INFO: coredns-64897985d-jbxzl k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:16.658: INFO: coredns-64897985d-svmtf k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:16.658: INFO: kindnet-kzh2l k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:09 +0000 UTC }] Apr 17 13:36:16.658: INFO: kindnet-mrwqq k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:01 +0000 UTC }] Apr 17 13:36:16.658: INFO: kube-proxy-zrgbs k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC }] Apr 17 13:36:16.658: INFO: kube-proxy-zzq6m k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC }] Apr 17 13:36:16.658: INFO: Apr 17 13:36:18.666: INFO: The status of Pod coredns-64897985d-jbxzl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: The status of Pod coredns-64897985d-svmtf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: The status of Pod kindnet-kzh2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: The status of Pod kindnet-mrwqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: The status of Pod 
kube-proxy-zrgbs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: The status of Pod kube-proxy-zzq6m is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:18.666: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (4 seconds elapsed) Apr 17 13:36:18.666: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 17 13:36:18.667: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:18.667: INFO: coredns-64897985d-jbxzl k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:18.667: INFO: coredns-64897985d-svmtf k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:18.667: INFO: kindnet-kzh2l k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:09 +0000 UTC }] Apr 17 13:36:18.667: INFO: kindnet-mrwqq k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:01 +0000 UTC }] Apr 17 13:36:18.667: INFO: kube-proxy-zrgbs k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC }] Apr 17 13:36:18.667: INFO: kube-proxy-zzq6m k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC }] Apr 17 13:36:18.667: INFO: Apr 17 13:36:20.656: INFO: The status of Pod coredns-64897985d-jbxzl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: The status of Pod coredns-64897985d-svmtf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: The status of Pod kindnet-kzh2l is Running (Ready = false), waiting for it to be either 
Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: The status of Pod kindnet-mrwqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: The status of Pod kube-proxy-zrgbs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: The status of Pod kube-proxy-zzq6m is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:20.656: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (6 seconds elapsed) Apr 17 13:36:20.656: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 17 13:36:20.656: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:20.656: INFO: coredns-64897985d-jbxzl k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:20.656: INFO: coredns-64897985d-svmtf k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:20.656: INFO: kindnet-kzh2l k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:09 +0000 UTC }] Apr 17 13:36:20.656: INFO: kindnet-mrwqq k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:01 +0000 UTC }] Apr 17 13:36:20.656: INFO: kube-proxy-zrgbs k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC }] Apr 17 13:36:20.656: INFO: kube-proxy-zzq6m k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC }] Apr 17 13:36:20.656: INFO: Apr 17 13:36:22.653: INFO: The status of Pod coredns-64897985d-jbxzl is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: The status of 
Pod coredns-64897985d-svmtf is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: The status of Pod kindnet-kzh2l is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: The status of Pod kindnet-mrwqq is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: The status of Pod kube-proxy-zrgbs is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: The status of Pod kube-proxy-zzq6m is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:22.653: INFO: 14 / 20 pods in namespace 'kube-system' are running and ready (8 seconds elapsed) Apr 17 13:36:22.653: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 17 13:36:22.653: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:22.653: INFO: coredns-64897985d-jbxzl k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:22.653: INFO: coredns-64897985d-svmtf k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:08 +0000 UTC }] Apr 17 13:36:22.653: INFO: kindnet-kzh2l k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:10 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:17 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:09 +0000 UTC }] Apr 17 13:36:22.653: INFO: kindnet-mrwqq k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:08 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:28:01 +0000 UTC }] Apr 17 13:36:22.653: INFO: kube-proxy-zrgbs k8s-upgrade-and-conformance-4exvhp-worker-ae4lld Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:19 +0000 UTC }] Apr 17 13:36:22.653: INFO: kube-proxy-zzq6m k8s-upgrade-and-conformance-4exvhp-worker-z6rmoz Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:35:19 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:32:24 +0000 UTC 
}] Apr 17 13:36:22.653: INFO: Apr 17 13:36:24.652: INFO: The status of Pod coredns-64897985d-7bnsg is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:24.652: INFO: The status of Pod coredns-64897985d-mffpk is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Apr 17 13:36:24.652: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (10 seconds elapsed) Apr 17 13:36:24.652: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready. Apr 17 13:36:24.652: INFO: POD NODE PHASE GRACE CONDITIONS Apr 17 13:36:24.652: INFO: coredns-64897985d-7bnsg k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC }] Apr 17 13:36:24.652: INFO: coredns-64897985d-mffpk k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:24 +0000 UTC }] Apr 17 13:36:24.652: INFO: Apr 17 13:36:26.667: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (12 seconds elapsed) Apr 17 13:36:26.667: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready. 
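The 12-second wait above (re-checked every 2s until every kube-system pod reports Ready) is performed inside the upstream Kubernetes e2e framework before any conformance spec runs. As an aside, a minimal client-go sketch of an equivalent readiness poll looks like the following; it is not the framework's actual implementation, and only the /tmp/kubeconfig path, the kube-system namespace, and the 2s/10m timings are taken from the log above.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s (the cadence seen in the log) until every kube-system pod
	// is Running and Ready, or 10 minutes elapse.
	err = wait.PollImmediate(2*time.Second, 10*time.Minute, func() (bool, error) {
		pods, listErr := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if listErr != nil {
			return false, nil // retry on transient list errors
		}
		ready := 0
		for i := range pods.Items {
			if pods.Items[i].Status.Phase == corev1.PodRunning && podReady(&pods.Items[i]) {
				ready++
			}
		}
		fmt.Printf("%d / %d pods in namespace 'kube-system' are running and ready\n", ready, len(pods.Items))
		return ready == len(pods.Items), nil
	})
	if err != nil {
		panic(err)
	}
}
```

In this run the wait succeeded (16/16 pods ready after 12 seconds), so the readiness churn above is only fallout from the kube-proxy/CoreDNS upgrade, not the cause of the failure.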
Apr 17 13:36:26.667: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start Apr 17 13:36:26.672: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed) Apr 17 13:36:26.672: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed) Apr 17 13:36:26.672: INFO: e2e test version: v1.23.5 Apr 17 13:36:26.675: INFO: kube-apiserver version: v1.23.5 Apr 17 13:36:26.676: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:36:26.683: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m Apr 17 13:36:26.700: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:36:26.715: INFO: Cluster IP family: ipv4 Apr 17 13:36:26.700: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:36:26.718: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m Apr 17 13:36:26.714: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:36:26.731: INFO: Cluster IP family: ipv4 �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:26.738: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota W0417 13:36:26.777809 13 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 17 13:36:26.777: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Getting a ResourceQuota �[1mSTEP�[0m: Updating a ResourceQuota �[1mSTEP�[0m: Verifying a ResourceQuota was modified �[1mSTEP�[0m: Deleting a ResourceQuota �[1mSTEP�[0m: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:26.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-9121" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":-1,"completed":1,"skipped":2,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:26.724: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc W0417 13:36:26.758932 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 17 13:36:26.758: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for all rs to be garbage collected �[1mSTEP�[0m: expected 0 rs, got 1 rs �[1mSTEP�[0m: expected 0 pods, got 2 pods �[1mSTEP�[0m: Gathering metrics Apr 17 13:36:27.370: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-4exvhp-control-plane-ss4pf is Running (Ready = true) Apr 17 13:36:27.515: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:27.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-2858" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":0,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:27.536: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename lease-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:27.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "lease-test-3742" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":2,"skipped":5,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:27.645: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating secret with name secret-test-e822660f-7ea8-40dc-9740-0c97a19eb3c6 �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 17 13:36:27.732: INFO: Waiting up to 5m0s for pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b" in namespace "secrets-9287" to be "Succeeded or Failed" Apr 17 13:36:27.736: INFO: Pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.891802ms Apr 17 13:36:29.741: INFO: Pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008439079s Apr 17 13:36:31.843: INFO: Pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110157875s Apr 17 13:36:33.846: INFO: Pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.11343699s �[1mSTEP�[0m: Saw pod success Apr 17 13:36:33.846: INFO: Pod "pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b" satisfied condition "Succeeded or Failed" Apr 17 13:36:33.848: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:36:33.873: INFO: Waiting for pod pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b to disappear Apr 17 13:36:33.877: INFO: Pod pod-secrets-83ec05d4-af32-4ec4-8de5-f8981484d14b no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:33.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-9287" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:26.699: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api W0417 13:36:26.754084 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Apr 17 13:36:26.754: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:36:26.769: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca" in namespace "downward-api-4708" to be "Succeeded or Failed" Apr 17 13:36:26.774: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459719ms Apr 17 13:36:28.790: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020873781s Apr 17 13:36:31.015: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca": Phase="Running", Reason="", readiness=true. Elapsed: 4.245979439s Apr 17 13:36:33.032: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca": Phase="Running", Reason="", readiness=true. Elapsed: 6.2625609s Apr 17 13:36:35.035: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.265877899s �[1mSTEP�[0m: Saw pod success Apr 17 13:36:35.035: INFO: Pod "downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca" satisfied condition "Succeeded or Failed" Apr 17 13:36:35.038: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca container client-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:36:35.060: INFO: Waiting for pod downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca to disappear Apr 17 13:36:35.062: INFO: Pod downwardapi-volume-a7d15bbb-c2ed-4e81-9be5-c67c2c2a65ca no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:35.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4708" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":6,"failed":0} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:33.893: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:36:34.351: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:36:37.369: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a mutating webhook configuration �[1mSTEP�[0m: Updating a mutating webhook configuration's rules to not include the create operation �[1mSTEP�[0m: Creating a configMap that should not be mutated �[1mSTEP�[0m: Patching a mutating webhook configuration's rules to include the create operation �[1mSTEP�[0m: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:37.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5568" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5568-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":4,"skipped":6,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:37.513: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename certificates �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/certificates.k8s.io �[1mSTEP�[0m: getting /apis/certificates.k8s.io/v1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Apr 17 13:36:38.162: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 17 13:36:38.173: INFO: waiting for watch events with expected annotations Apr 17 13:36:38.173: INFO: saw patched and updated annotations �[1mSTEP�[0m: getting /approval �[1mSTEP�[0m: patching /approval �[1mSTEP�[0m: updating /approval �[1mSTEP�[0m: getting /status �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:38.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "certificates-8332" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":5,"skipped":24,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:38.260: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename endpointslice �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/discovery.k8s.io �[1mSTEP�[0m: getting /apis/discovery.k8s.iov1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Apr 17 13:36:38.323: INFO: starting watch �[1mSTEP�[0m: cluster-wide listing �[1mSTEP�[0m: cluster-wide watching Apr 17 13:36:38.327: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 17 13:36:38.341: INFO: waiting for watch events with expected annotations Apr 17 13:36:38.341: INFO: saw patched and updated annotations �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:38.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "endpointslice-4751" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":6,"skipped":51,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:38.390: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating pod Apr 17 13:36:38.428: INFO: The status of Pod pod-hostip-bdb84e5b-6a40-406a-9c20-f4f37d4a6f4a is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:36:40.431: INFO: The status of Pod pod-hostip-bdb84e5b-6a40-406a-9c20-f4f37d4a6f4a is Running (Ready = true) Apr 17 13:36:40.436: INFO: Pod pod-hostip-bdb84e5b-6a40-406a-9c20-f4f37d4a6f4a has hostIP: 172.18.0.4 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:40.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-6338" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":68,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:40.502: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a Pod with a static label �[1mSTEP�[0m: watching for Pod to be ready Apr 17 13:36:40.549: INFO: observed Pod pod-test in namespace pods-7860 in phase Pending with labels: map[test-pod-static:true] & conditions [] Apr 17 13:36:40.554: INFO: observed Pod pod-test in namespace pods-7860 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC }] Apr 17 13:36:40.563: INFO: observed Pod pod-test in namespace pods-7860 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC }] Apr 17 13:36:41.729: INFO: Found Pod pod-test in namespace pods-7860 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-17 13:36:40 +0000 UTC }] �[1mSTEP�[0m: patching the Pod with a new Label and updated data Apr 17 13:36:41.739: INFO: observed event type ADDED �[1mSTEP�[0m: getting the Pod and ensuring that it's patched �[1mSTEP�[0m: replacing the Pod's status Ready condition to False �[1mSTEP�[0m: check the Pod again to ensure its Ready conditions are False �[1mSTEP�[0m: deleting the Pod via a Collection with a LabelSelector �[1mSTEP�[0m: watching for the Pod to be deleted Apr 17 13:36:41.758: INFO: observed event type ADDED Apr 17 13:36:41.758: INFO: observed event type MODIFIED Apr 17 13:36:41.758: INFO: observed event type MODIFIED Apr 17 13:36:41.758: INFO: observed event type MODIFIED Apr 17 13:36:41.758: INFO: observed event type MODIFIED Apr 
17 13:36:41.758: INFO: observed event type MODIFIED Apr 17 13:36:41.758: INFO: observed event type MODIFIED Apr 17 13:36:43.733: INFO: observed event type MODIFIED Apr 17 13:36:44.742: INFO: observed event type MODIFIED Apr 17 13:36:44.747: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:44.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-7860" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":8,"skipped":107,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:44.761: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:36:44.794: INFO: Creating ReplicaSet my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2 Apr 17 13:36:44.802: INFO: Pod name my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2: Found 0 pods out of 1 Apr 17 13:36:49.805: INFO: Pod name my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2: Found 1 pods out of 1 Apr 17 13:36:49.805: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2" is running Apr 17 13:36:49.807: INFO: Pod "my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2-6txh5" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-17 13:36:44 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-17 13:36:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-17 13:36:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-17 13:36:44 +0000 UTC Reason: Message:}]) Apr 17 13:36:49.807: INFO: Trying to dial the pod Apr 17 13:36:54.818: INFO: Controller my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2: Got expected result from replica 1 [my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2-6txh5]: "my-hostname-basic-c1702f34-173a-4b93-ad50-61be00328fb2-6txh5", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:36:54.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-2" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":9,"skipped":108,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:54.832: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:36:55.694: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:36:58.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Setting timeout (1s) shorter than webhook latency (5s) �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Request fails when timeout (1s) is shorter than slow webhook latency (5s) �[1mSTEP�[0m: Having no error when timeout is shorter than webhook latency and failure policy is ignore �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Having no error when timeout is longer than webhook latency �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Having no error when timeout is empty (defaulted to 10s in v1) �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:37:10.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-9499" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9499-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":10,"skipped":109,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:37:10.882: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test override arguments
Apr 17 13:37:10.959: INFO: Waiting up to 5m0s for pod "client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1" in namespace "containers-8734" to be "Succeeded or Failed"
Apr 17 13:37:10.963: INFO: Pod "client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567571ms
Apr 17 13:37:12.968: INFO: Pod "client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008768404s
STEP: Saw pod success
Apr 17 13:37:12.968: INFO: Pod "client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1" satisfied condition "Succeeded or Failed"
Apr 17 13:37:12.970: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1 container agnhost-container: <nil>
STEP: delete the pod
Apr 17 13:37:12.982: INFO: Waiting for pod client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1 to disappear
Apr 17 13:37:12.985: INFO: Pod client-containers-d205c959-8dd1-473b-8b5f-30c2099055d1 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:37:12.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8734" for this suite.
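In pod-spec terms, the "override the image's default arguments (docker cmd)" check comes down to setting args, which replaces the image's CMD (command would replace its ENTRYPOINT). A minimal stand-alone version, with placeholder names and the agnhost entrypoint-tester subcommand, which simply echoes the arguments it was started with:

# Sketch only: the pod prints the overridden arguments and exits, ending in phase Succeeded.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: agnhost-container
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    args: ["entrypoint-tester", "override", "arguments"]   # replaces the image's default CMD
EOF
until [ "$(kubectl get pod client-containers-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
kubectl logs client-containers-demo    # the overridden arguments should appear in the output
kubectl delete pod client-containers-demo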
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":115,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:37:13.048: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:37:13.074: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 17 13:37:16.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7720 --namespace=crd-publish-openapi-7720 create -f -'
Apr 17 13:37:17.014: INFO: stderr: ""
Apr 17 13:37:17.014: INFO: stdout: "e2e-test-crd-publish-openapi-9186-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 17 13:37:17.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7720 --namespace=crd-publish-openapi-7720 delete e2e-test-crd-publish-openapi-9186-crds test-cr'
Apr 17 13:37:17.083: INFO: stderr: ""
Apr 17 13:37:17.083: INFO: stdout: "e2e-test-crd-publish-openapi-9186-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 17 13:37:17.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7720 --namespace=crd-publish-openapi-7720 apply -f -'
Apr 17 13:37:17.265: INFO: stderr: ""
Apr 17 13:37:17.265: INFO: stdout: "e2e-test-crd-publish-openapi-9186-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 17 13:37:17.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7720 --namespace=crd-publish-openapi-7720 delete e2e-test-crd-publish-openapi-9186-crds test-cr'
Apr 17 13:37:17.337: INFO: stderr: ""
Apr 17 13:37:17.337: INFO: stdout: "e2e-test-crd-publish-openapi-9186-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 17 13:37:17.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7720 explain e2e-test-crd-publish-openapi-9186-crds'
Apr 17 13:37:17.510: INFO: stderr: ""
Apr 17 13:37:17.510: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9186-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:37:19.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7720" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":12,"skipped":160,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:37:19.679: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 17 13:37:19.712: INFO: Waiting up to 5m0s for pod "downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60" in namespace "downward-api-8147" to be "Succeeded or Failed"
Apr 17 13:37:19.715: INFO: Pod "downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295213ms
Apr 17 13:37:21.719: INFO: Pod "downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006128217s
STEP: Saw pod success
Apr 17 13:37:21.719: INFO: Pod "downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60" satisfied condition "Succeeded or Failed"
Apr 17 13:37:21.722: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60 container dapi-container: <nil>
STEP: delete the pod
Apr 17 13:37:21.736: INFO: Waiting for pod downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60 to disappear
Apr 17 13:37:21.739: INFO: Pod downward-api-9df6a6d5-2fcf-4efb-b828-06a53fcf9b60 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:37:21.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8147" for this suite.
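The Downward API spec above injects pod metadata into environment variables via fieldRef. A minimal stand-alone pod doing the same thing (names and image are placeholders; the e2e pod also checks host and pod IPs, which is omitted here):

# The pod's own UID, name and namespace are exposed as env vars via the downward API;
# the container prints them and exits.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.35
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef: {fieldPath: metadata.uid}
    - name: POD_NAME
      valueFrom:
        fieldRef: {fieldPath: metadata.name}
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef: {fieldPath: metadata.namespace}
EOF
until [ "$(kubectl get pod downward-api-demo -o jsonpath='{.status.phase}')" = "Succeeded" ]; do sleep 2; done
kubectl logs downward-api-demo   # prints POD_UID=..., POD_NAME=..., POD_NAMESPACE=...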
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":180,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:37:21.837: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3243.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3243.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3243.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3243.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3243.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3243.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3243.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3243.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3243.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3243.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 162.203.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.203.162_udp@PTR;check="$$(dig +tcp +noall +answer +search 162.203.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.203.162_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: the same probe loop as above, with the wheezy_* result files replaced by the corresponding jessie_* files under /results
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 17 13:37:27.917: INFO: Unable to read wheezy_udp@dns-test-service.dns-3243.svc.cluster.local from pod dns-3243/dns-test-efcf58c3-a8c4-40ae-9214-754387e24ff9: the server could not find the requested resource (get pods dns-test-efcf58c3-a8c4-40ae-9214-754387e24ff9)
Between Apr 17 13:37:27.920 and 13:37:27.945 the remaining seven lookups of this round (wheezy and jessie, UDP and TCP, A and SRV) failed with the same "could not find the requested resource" error.
Apr 17 13:37:27.955: INFO: Lookups using dns-3243/dns-test-efcf58c3-a8c4-40ae-9214-754387e24ff9 failed for: [wheezy_udp@dns-test-service.dns-3243.svc.cluster.local wheezy_tcp@dns-test-service.dns-3243.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local jessie_udp@dns-test-service.dns-3243.svc.cluster.local jessie_tcp@dns-test-service.dns-3243.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3243.svc.cluster.local]
The probe then repeated every five seconds; the rounds at Apr 17 13:37:32, 13:37:37, 13:37:42, 13:37:47 and 13:37:52 failed for the same eight names, each lookup returning "the server could not find the requested resource (get pods dns-test-efcf58c3-a8c4-40ae-9214-754387e24ff9)" and each round ending with the same "Lookups ... failed for" summary as above.
Apr 17 13:37:58.019: INFO: DNS probes using dns-3243/dns-test-efcf58c3-a8c4-40ae-9214-754387e24ff9 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:37:58.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3243" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":14,"skipped":249,"failed":0}
SS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:37:58.682: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 17 13:37:58.728: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268" in namespace "downward-api-4384" to be "Succeeded or Failed"
Apr 17 13:37:58.734: INFO: Pod "downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268": Phase="Pending", Reason="", readiness=false. Elapsed: 4.653214ms
Apr 17 13:38:00.738: INFO: Pod "downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008798847s
STEP: Saw pod success
Apr 17 13:38:00.738: INFO: Pod "downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268" satisfied condition "Succeeded or Failed"
Apr 17 13:38:00.741: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268 container client-container: <nil>
STEP: delete the pod
Apr 17 13:38:00.755: INFO: Waiting for pod downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268 to disappear
Apr 17 13:38:00.758: INFO: Pod downwardapi-volume-ac9bc909-1e0f-48e9-a23b-4a930a9b1268 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:38:00.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4384" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":251,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:38:00.773: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:38:28.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2025" for this suite.
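The ResourceQuota spec above tracks a quota's status.used as a ConfigMap is created and deleted. A minimal way to watch the same lifecycle by hand; the namespace, quota name and hard limit are placeholders, not what the test creates:

# Create a quota that limits ConfigMaps, then watch .status.used.configmaps track
# a ConfigMap being created and deleted, mirroring the "captures" / "released" steps.
kubectl create namespace quota-demo
kubectl -n quota-demo create quota test-quota --hard=configmaps=2

kubectl -n quota-demo create configmap quota-probe --from-literal=a=b
sleep 5   # quota status is recalculated asynchronously by the controller
kubectl -n quota-demo get quota test-quota -o jsonpath='{.status.used.configmaps}'; echo   # usage now includes the new ConfigMap

kubectl -n quota-demo delete configmap quota-probe
sleep 5
kubectl -n quota-demo get quota test-quota -o jsonpath='{.status.used.configmaps}'; echo   # usage released again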
[Conformance]","total":-1,"completed":16,"skipped":253,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:36:26.882: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating service in namespace services-305 �[1mSTEP�[0m: creating service affinity-clusterip-transition in namespace services-305 �[1mSTEP�[0m: creating replication controller affinity-clusterip-transition in namespace services-305 I0417 13:36:26.932198 13 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-305, replica count: 3 I0417 13:36:29.984964 13 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0417 13:36:32.985329 13 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 13:36:33.051: INFO: Creating new exec pod Apr 17 13:36:36.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:36:38.242: INFO: rc: 1 Apr 17 13:36:38.242: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
From Apr 17 13:36:39.243 through 13:38:12.242 the test re-ran the same command, '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80', roughly every three seconds; every attempt returned rc: 1 with the same "nc: connect to affinity-clusterip-transition port 80 (tcp) timed out" failure, each followed by "Retrying...". The last attempt shown in this log:
Apr 17 13:38:12.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Apr 17 13:38:14.395: INFO: rc: 1
Apr 17 13:38:14.395: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 17 13:38:15.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:17.390: INFO: rc: 1 Apr 17 13:38:17.390: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:18.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:20.407: INFO: rc: 1 Apr 17 13:38:20.408: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:21.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:23.383: INFO: rc: 1 Apr 17 13:38:23.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:24.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:26.383: INFO: rc: 1 Apr 17 13:38:26.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:38:27.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:29.383: INFO: rc: 1 Apr 17 13:38:29.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:30.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:32.388: INFO: rc: 1 Apr 17 13:38:32.388: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:33.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:35.383: INFO: rc: 1 Apr 17 13:38:35.383: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:36.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:38.394: INFO: rc: 1 Apr 17 13:38:38.394: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:38:38.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:40.580: INFO: rc: 1 Apr 17 13:38:40.580: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-305 exec execpod-affinity86pw6 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:40.580: FAIL: Unexpected error: <*errors.errorString | 0xc001ae6260>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x6f41d04, {0x78eb710, 0xc000a3e600}, 0xc0007d9b80, 0x1) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2959 +0x669 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2910 k8s.io/kubernetes/test/e2e/network.glob..func24.24() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1838 +0x90 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0000c76c0, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Apr 17 13:38:40.581: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip-transition in namespace services-305, will wait for the garbage collector to delete the pods Apr 17 13:38:40.660: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.608328ms Apr 17 13:38:40.761: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 101.289778ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:38:43.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-305" for this suite. 
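For reference, the probe that keeps timing out above is just a kubectl exec plus netcat against the Service name. While the test resources still exist it can be replayed by hand from the same exec pod; the commands below are only a sketch that reuses the pod and namespace names from this log, and adds a direct ClusterIP probe (the jsonpath lookup is an assumption, not something the test runs) to separate DNS problems from kube-proxy/dataplane problems:

  # Replay the exact reachability check the e2e test runs (2-second connect timeout).
  kubectl --kubeconfig=/tmp/kubeconfig -n services-305 exec execpod-affinity86pw6 -- \
    /bin/sh -c 'echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'

  # Probe the ClusterIP directly; if this works while the name does not, suspect DNS,
  # and if both time out, suspect kube-proxy or the CNI dataplane on the node.
  CLUSTER_IP=$(kubectl --kubeconfig=/tmp/kubeconfig -n services-305 \
    get svc affinity-clusterip-transition -o jsonpath='{.spec.clusterIP}')
  kubectl --kubeconfig=/tmp/kubeconfig -n services-305 exec execpod-affinity86pw6 -- \
    /bin/sh -c "echo hostName | nc -v -t -w 2 ${CLUSTER_IP} 80"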
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• Failure [136.208 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:38:40.580: Unexpected error:
      <*errors.errorString | 0xc001ae6260>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2959
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":0}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:36:35.071: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-694
Apr 17 13:36:35.110: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:36:37.114: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Apr 17 13:36:37.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 17 13:36:37.290: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Apr 17 13:36:37.290: INFO: stdout: "iptables"
Apr 17 13:36:37.290: INFO: proxyMode: iptables
Apr 17 13:36:37.299: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 17 13:36:37.302: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-694
STEP: creating replication controller affinity-nodeport-timeout in namespace services-694
I0417 13:36:37.317269 18 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-694, replica count: 3
I0417 13:36:40.368082 18 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 17 13:36:40.378: INFO: Creating new exec pod
Apr 17 13:36:43.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x
-c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:36:45.552: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:36:45.552: INFO: stdout: "" Apr 17 13:36:46.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:36:48.712: INFO: stderr: "+ + nc -v -techo -w hostName 2\n affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:36:48.712: INFO: stdout: "" Apr 17 13:36:49.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:36:51.688: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:36:51.688: INFO: stdout: "" Apr 17 13:36:52.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:36:54.703: INFO: stderr: "+ + nc -v -t -w 2 affinity-nodeport-timeout 80\necho hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:36:54.703: INFO: stdout: "" Apr 17 13:36:55.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:36:57.692: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:36:57.692: INFO: stdout: "" Apr 17 13:36:58.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:00.720: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:00.720: INFO: stdout: "" Apr 17 13:37:01.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:03.702: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:03.702: INFO: stdout: "" Apr 17 13:37:04.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:06.693: INFO: stderr: "+ + nc -v -t -w 2 affinity-nodeport-timeout 80\necho hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:06.693: INFO: stdout: "" Apr 17 13:37:07.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:09.695: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 
affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:09.695: INFO: stdout: "" Apr 17 13:37:10.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:12.696: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:12.696: INFO: stdout: "" Apr 17 13:37:13.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:15.699: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:15.699: INFO: stdout: "" Apr 17 13:37:16.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:18.699: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:18.699: INFO: stdout: "" Apr 17 13:37:19.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:21.703: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:21.703: INFO: stdout: "" Apr 17 13:37:22.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:24.718: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:24.718: INFO: stdout: "" Apr 17 13:37:25.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:27.715: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:27.715: INFO: stdout: "" Apr 17 13:37:28.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:30.692: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:30.692: INFO: stdout: "" Apr 17 13:37:31.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:33.697: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:33.697: INFO: stdout: 
"" Apr 17 13:37:34.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:36.714: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:36.714: INFO: stdout: "" Apr 17 13:37:37.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:39.709: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:39.709: INFO: stdout: "" Apr 17 13:37:40.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:42.702: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:42.702: INFO: stdout: "" Apr 17 13:37:43.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:45.707: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:45.707: INFO: stdout: "" Apr 17 13:37:46.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:48.724: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:48.724: INFO: stdout: "" Apr 17 13:37:49.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:51.691: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:51.691: INFO: stdout: "" Apr 17 13:37:52.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:54.711: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:54.711: INFO: stdout: "" Apr 17 13:37:55.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:37:57.710: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:37:57.710: INFO: stdout: "" Apr 17 13:37:58.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec 
execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:00.759: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:00.759: INFO: stdout: "" Apr 17 13:38:01.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:03.691: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:03.691: INFO: stdout: "" Apr 17 13:38:04.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:06.708: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:06.708: INFO: stdout: "" Apr 17 13:38:07.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:09.708: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:09.708: INFO: stdout: "" Apr 17 13:38:10.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:12.700: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:12.700: INFO: stdout: "" Apr 17 13:38:13.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:15.701: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:15.701: INFO: stdout: "" Apr 17 13:38:16.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:18.698: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:18.701: INFO: stdout: "" Apr 17 13:38:19.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:21.701: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:21.701: INFO: stdout: "" Apr 17 13:38:22.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:24.709: INFO: stderr: "+ 
echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:24.709: INFO: stdout: "" Apr 17 13:38:25.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:27.697: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:27.697: INFO: stdout: "" Apr 17 13:38:28.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:30.692: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:30.692: INFO: stdout: "" Apr 17 13:38:31.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:33.693: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:33.693: INFO: stdout: "" Apr 17 13:38:34.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:36.714: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:36.714: INFO: stdout: "" Apr 17 13:38:37.552: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:39.698: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:39.698: INFO: stdout: "" Apr 17 13:38:40.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:42.701: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:42.701: INFO: stdout: "" Apr 17 13:38:43.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:45.736: INFO: stderr: "+ nc -v -t -w 2 affinity-nodeport-timeout 80\n+ echo hostName\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 17 13:38:45.736: INFO: stdout: "" Apr 17 13:38:45.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-694 exec execpod-affinityv4pfs -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:38:47.895: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 
17 13:38:47.895: INFO: stdout: "" Apr 17 13:38:47.896: FAIL: Unexpected error: <*errors.errorString | 0xc002dac3c0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000d43a20, {0x78eb710, 0xc000d15e00}, 0xc000c55400) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876 +0x7cf k8s.io/kubernetes/test/e2e/network.glob..func24.26() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1870 +0x8b k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2371919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000cf8b60, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Apr 17 13:38:47.896: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-timeout in namespace services-694, will wait for the garbage collector to delete the pods Apr 17 13:38:47.972: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.93128ms Apr 17 13:38:48.074: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 101.173867ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:38:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-694" for this suite. 
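The spec that just failed exercises ClientIP session affinity with an explicit timeout on a NodePort Service. As a rough, hypothetical illustration (not the manifest the e2e framework actually generates), the relevant Service shape looks like the following; the name, selector and timeout value are placeholders, while sessionAffinity/sessionAffinityConfig are the standard core/v1 fields:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: affinity-nodeport-timeout-demo    # hypothetical name, for illustration only
  spec:
    type: NodePort
    selector:
      app: affinity-demo                    # placeholder selector
    ports:
    - port: 80
      targetPort: 80
    sessionAffinity: ClientIP               # pin each client IP to one backend pod
    sessionAffinityConfig:
      clientIP:
        timeoutSeconds: 10                  # affinity expires after ~10s of idle time
  EOF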
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• Failure [135.233 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:38:47.896: Unexpected error:
      <*errors.errorString | 0xc002dac3c0>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:38:28.949: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should succeed in writing subpaths in container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: waiting for pod running
STEP: creating a file in subpath
Apr 17 13:38:30.994: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-2638 PodName:var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 17 13:38:30.994: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:38:30.995: INFO: ExecWithOptions: Clientset creation
Apr 17 13:38:30.995: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-2638/pods/var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING))
STEP: test for file in mounted path
Apr 17 13:38:31.069: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-2638 PodName:var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 17 13:38:31.069: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:38:31.070: INFO: ExecWithOptions: Clientset creation
Apr 17 13:38:31.070: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-2638/pods/var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING))
STEP: updating the annotation value
Apr 17 13:38:31.658: INFO: Successfully updated pod "var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85"
STEP: waiting for annotated pod running
STEP: deleting the
pod gracefully Apr 17 13:38:31.661: INFO: Deleting pod "var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85" in namespace "var-expansion-2638" Apr 17 13:38:31.665: INFO: Wait up to 5m0s for pod "var-expansion-c789eb71-c683-4df8-9e57-cbb642766d85" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:39:05.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-2638" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":17,"skipped":325,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:39:05.734: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a ResourceQuota with best effort scope �[1mSTEP�[0m: Ensuring ResourceQuota status is calculated �[1mSTEP�[0m: Creating a ResourceQuota with not best effort scope �[1mSTEP�[0m: Ensuring ResourceQuota status is calculated �[1mSTEP�[0m: Creating a best-effort pod �[1mSTEP�[0m: Ensuring resource quota with best effort scope captures the pod usage �[1mSTEP�[0m: Ensuring resource quota with not best effort ignored the pod usage �[1mSTEP�[0m: Deleting the pod �[1mSTEP�[0m: Ensuring resource quota status released the pod usage �[1mSTEP�[0m: Creating a not best-effort pod �[1mSTEP�[0m: Ensuring resource quota with not best effort scope captures the pod usage �[1mSTEP�[0m: Ensuring resource quota with best effort scope ignored the pod usage �[1mSTEP�[0m: Deleting the pod �[1mSTEP�[0m: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:39:21.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-209" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":-1,"completed":18,"skipped":358,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:39:21.864: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating secret with name secret-test-map-07dc4fdd-e86c-4350-8110-c47ba277eadc �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 17 13:39:21.903: INFO: Waiting up to 5m0s for pod "pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720" in namespace "secrets-3498" to be "Succeeded or Failed" Apr 17 13:39:21.905: INFO: Pod "pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150271ms Apr 17 13:39:23.909: INFO: Pod "pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005845083s �[1mSTEP�[0m: Saw pod success Apr 17 13:39:23.909: INFO: Pod "pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720" satisfied condition "Succeeded or Failed" Apr 17 13:39:23.912: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:39:23.925: INFO: Waiting for pod pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720 to disappear Apr 17 13:39:23.927: INFO: Pod pod-secrets-fd2a208a-8052-476c-87ed-ab7171778720 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:39:23.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-3498" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":367,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:39:23.964: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Create a ReplicaSet �[1mSTEP�[0m: Verify that the required pods have come up Apr 17 13:39:24.003: INFO: Pod name sample-pod: Found 0 pods out of 3 Apr 17 13:39:29.007: INFO: Pod name sample-pod: Found 3 pods out of 3 �[1mSTEP�[0m: ensuring each pod is running Apr 17 13:39:31.017: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} �[1mSTEP�[0m: Listing all ReplicaSets �[1mSTEP�[0m: DeleteCollection of the ReplicaSets �[1mSTEP�[0m: After DeleteCollection verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:39:31.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-5293" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":20,"skipped":389,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:39:31.039: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward api env vars Apr 17 13:39:31.080: INFO: Waiting up to 5m0s for pod "downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6" in namespace "downward-api-2713" to be "Succeeded or Failed" Apr 17 13:39:31.083: INFO: Pod "downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.568432ms Apr 17 13:39:33.088: INFO: Pod "downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007024226s Apr 17 13:39:35.091: INFO: Pod "downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010531125s �[1mSTEP�[0m: Saw pod success Apr 17 13:39:35.091: INFO: Pod "downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6" satisfied condition "Succeeded or Failed" Apr 17 13:39:35.094: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:39:35.115: INFO: Waiting for pod downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6 to disappear Apr 17 13:39:35.118: INFO: Pod downward-api-c95bf7fd-8de5-46cd-a806-492e3f5c9de6 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:39:35.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-2713" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":390,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:39:35.158: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:39:35.195: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380" in namespace "projected-8385" to be "Succeeded or Failed" Apr 17 13:39:35.197: INFO: Pod "downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236695ms Apr 17 13:39:37.201: INFO: Pod "downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006355212s
STEP: Saw pod success
Apr 17 13:39:37.201: INFO: Pod "downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380" satisfied condition "Succeeded or Failed"
Apr 17 13:39:37.203: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380 container client-container: <nil>
STEP: delete the pod
Apr 17 13:39:37.226: INFO: Waiting for pod downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380 to disappear
Apr 17 13:39:37.228: INFO: Pod downwardapi-volume-9f8902aa-130a-4ef5-bced-cdca7f157380 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:39:37.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8385" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":413,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:39:37.239: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:39:37.661: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:39:40.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:39:40.689: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7837-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:39:43.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8585" for this suite.
STEP: Destroying namespace "webhook-8585-markers" for this suite.
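The admission-webhook spec above registers a mutating webhook for the e2e-test-webhook-7837-crds.webhook.example.com custom resource and then flips the CRD's storage version from v1 to v2. As a hedged sketch of what such a registration can look like (the webhook name, service path and CA handling below are placeholders, not what the test framework generated; the service name and namespace come from the log):

  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: e2e-test-mutating-webhook-demo                # hypothetical
  webhooks:
  - name: mutate-custom-resource.webhook.example.com    # placeholder webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      service:
        namespace: webhook-8585                         # namespace from the log
        name: e2e-test-webhook                          # service name from the log
        path: /mutating-custom-resource                 # placeholder path
      # caBundle omitted here; the e2e framework injects its own serving CA
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1", "v2"]
      operations: ["CREATE", "UPDATE"]
      resources: ["e2e-test-webhook-7837-crds"]
  EOF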
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":23,"skipped":415,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":17,"failed":1,"failures":["[sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:38:43.093: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-7138
STEP: creating service affinity-clusterip-transition in namespace services-7138
STEP: creating replication controller affinity-clusterip-transition in namespace services-7138
I0417 13:38:43.136088 13 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-7138, replica count: 3
I0417 13:38:46.188855 13 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 17 13:38:46.194: INFO: Creating new exec pod
Apr 17 13:38:49.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Apr 17 13:38:51.364: INFO: rc: 1
Apr 17 13:38:51.364: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:
stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-clusterip-transition 80
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
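Note: the [sig-network] case starting here creates a ClusterIP service with session affinity and then repeatedly probes it from an exec pod with the nc one-liner shown above; it must reach a backend before it can assert anything about affinity. For reference, a client-IP-affinity service of the kind being exercised looks roughly like the sketch below — made-up names, not the exact object the test creates:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: affinity-demo              # illustrative; this run uses affinity-clusterip-transition
    spec:
      type: ClusterIP
      selector:
        app: affinity-demo             # must match the backend pods' labels
      sessionAffinity: ClientIP        # the setting this conformance case switches
      sessionAffinityConfig:
        clientIP:
          timeoutSeconds: 10800
      ports:
      - port: 80
        targetPort: 8080
    EOF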
Apr 17 13:38:52.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:54.533: INFO: rc: 1 Apr 17 13:38:54.533: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:55.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:38:57.531: INFO: rc: 1 Apr 17 13:38:57.531: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:38:58.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:00.520: INFO: rc: 1 Apr 17 13:39:00.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:01.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:03.510: INFO: rc: 1 Apr 17 13:39:03.510: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo+ hostNamenc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:04.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:06.525: INFO: rc: 1 Apr 17 13:39:06.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:07.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:09.526: INFO: rc: 1 Apr 17 13:39:09.526: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:10.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:12.524: INFO: rc: 1 Apr 17 13:39:12.524: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo+ nc -v -t -w 2 affinity-clusterip-transition 80 hostName nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:13.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:15.506: INFO: rc: 1 Apr 17 13:39:15.506: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:16.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:18.544: INFO: rc: 1 Apr 17 13:39:18.544: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:19.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:21.519: INFO: rc: 1 Apr 17 13:39:21.519: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:22.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:24.556: INFO: rc: 1 Apr 17 13:39:24.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:25.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:27.606: INFO: rc: 1 Apr 17 13:39:27.606: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:28.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:30.556: INFO: rc: 1 Apr 17 13:39:30.556: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + + echonc hostName -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:31.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:33.514: INFO: rc: 1 Apr 17 13:39:33.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:34.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:36.514: INFO: rc: 1 Apr 17 13:39:36.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:37.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:39.530: INFO: rc: 1 Apr 17 13:39:39.530: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:40.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:42.542: INFO: rc: 1 Apr 17 13:39:42.542: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:43.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:45.516: INFO: rc: 1 Apr 17 13:39:45.516: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:46.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:48.531: INFO: rc: 1 Apr 17 13:39:48.531: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:49.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:51.507: INFO: rc: 1 Apr 17 13:39:51.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:52.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:54.526: INFO: rc: 1 Apr 17 13:39:54.526: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + nc -v -t -w 2 affinity-clusterip-transition 80 + echo hostName nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:55.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:39:57.523: INFO: rc: 1 Apr 17 13:39:57.523: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:58.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:00.508: INFO: rc: 1 Apr 17 13:40:00.508: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:01.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:03.507: INFO: rc: 1 Apr 17 13:40:03.508: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:40:04.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:06.532: INFO: rc: 1 Apr 17 13:40:06.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:07.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:09.520: INFO: rc: 1 Apr 17 13:40:09.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:10.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:12.521: INFO: rc: 1 Apr 17 13:40:12.521: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:13.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:15.507: INFO: rc: 1 Apr 17 13:40:15.507: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:40:16.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:18.525: INFO: rc: 1 Apr 17 13:40:18.525: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:19.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:21.517: INFO: rc: 1 Apr 17 13:40:21.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:22.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:24.535: INFO: rc: 1 Apr 17 13:40:24.535: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:25.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:27.532: INFO: rc: 1 Apr 17 13:40:27.532: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:40:28.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:30.535: INFO: rc: 1 Apr 17 13:40:30.535: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:31.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:33.508: INFO: rc: 1 Apr 17 13:40:33.508: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:34.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:36.517: INFO: rc: 1 Apr 17 13:40:36.517: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:37.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:39.512: INFO: rc: 1 Apr 17 13:40:39.512: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:40:40.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:42.520: INFO: rc: 1 Apr 17 13:40:42.520: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:43.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:45.504: INFO: rc: 1 Apr 17 13:40:45.504: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:46.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:48.515: INFO: rc: 1 Apr 17 13:40:48.515: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:40:49.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Apr 17 13:40:51.514: INFO: rc: 1 Apr 17 13:40:51.514: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip-transition 80 nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:40:51.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Apr 17 13:40:53.646: INFO: rc: 1
Apr 17 13:40:53.646: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80:
Command stdout:
stderr:
+ nc -v -t -w 2+ affinity-clusterip-transition 80 echo hostName
nc: connect to affinity-clusterip-transition port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1
error: exit status 1
Retrying...
Apr 17 13:40:53.647: FAIL: Unexpected error:
    <*errors.errorString | 0xc002c5c370>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x6f41d04, {0x78eb710, 0xc000a3e900}, 0xc00054a780, 0x1)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2959 +0x669
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2910
k8s.io/kubernetes/test/e2e/network.glob..func24.24()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1838 +0x90
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0000c76c0, 0x71566f0)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
Apr 17 13:40:53.647: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-7138, will wait for the garbage collector to delete the pods
Apr 17 13:40:53.730: INFO: Deleting ReplicationController affinity-clusterip-transition took: 4.524923ms
Apr 17 13:40:53.831: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.930812ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:40:55.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7138" for this suite.
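Note: the failure above means the exec pod never reached affinity-clusterip-transition:80 within the 2m0s window, so the session-affinity behaviour itself was never exercised; the probe is plain TCP connectivity to the service name through cluster DNS and the ClusterIP path. When triaging this kind of timeout by hand, the usual first checks are whether the service has endpoints and whether the test's own probe succeeds — illustrative commands reusing the names from this run (the namespace is torn down during cleanup, so they only apply to a live reproduction):

    kubectl -n services-7138 get endpoints affinity-clusterip-transition -o wide
    kubectl -n services-7138 get svc affinity-clusterip-transition -o yaml | grep -A3 sessionAffinity
    kubectl -n services-7138 exec execpod-affinityznrzb -- /bin/sh -x -c \
      'echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'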
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• Failure [132.560 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:40:53.647: Unexpected error:
      <*errors.errorString | 0xc002c5c370>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2959
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":1,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:38:50.306: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-5075
Apr 17 13:38:50.344: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:38:52.347: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Apr 17 13:38:52.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 17 13:38:52.531: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Apr 17 13:38:52.531: INFO: stdout: "iptables"
Apr 17 13:38:52.531: INFO: proxyMode: iptables
Apr 17 13:38:52.542: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 17 13:38:52.545: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-5075
STEP: creating replication controller affinity-nodeport-timeout in namespace services-5075
I0417 13:38:52.566916 18 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-5075, replica count: 3
I0417 13:38:55.617851 18 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 17 13:38:55.626: INFO: Creating new exec pod
Apr 17 13:38:58.642:
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:00.788: INFO: rc: 1 Apr 17 13:39:00.788: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:01.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:03.921: INFO: rc: 1 Apr 17 13:39:03.921: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:04.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:06.924: INFO: rc: 1 Apr 17 13:39:06.925: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:07.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:09.947: INFO: rc: 1 Apr 17 13:39:09.948: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:10.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:12.951: INFO: rc: 1 Apr 17 13:39:12.951: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:13.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:15.927: INFO: rc: 1 Apr 17 13:39:15.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:16.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:18.932: INFO: rc: 1 Apr 17 13:39:18.932: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:19.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:21.936: INFO: rc: 1 Apr 17 13:39:21.936: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:22.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:24.946: INFO: rc: 1 Apr 17 13:39:24.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + + ncecho -v hostName -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:25.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:27.979: INFO: rc: 1 Apr 17 13:39:27.979: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:28.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:30.940: INFO: rc: 1 Apr 17 13:39:30.940: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:31.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:33.928: INFO: rc: 1 Apr 17 13:39:33.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:34.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:36.928: INFO: rc: 1 Apr 17 13:39:36.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:37.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:39.928: INFO: rc: 1 Apr 17 13:39:39.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:40.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:42.928: INFO: rc: 1 Apr 17 13:39:42.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Apr 17 13:39:43.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 17 13:39:45.930: INFO: rc: 1 Apr 17 13:39:45.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-nodeport-timeout 80 nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Apr 17 13:39:46.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Apr 17 13:39:48.930: INFO: rc: 1
Apr 17 13:39:48.930: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5075 exec execpod-affinity4rnqp -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
(the same probe was retried roughly every three seconds from 13:39:49 through 13:41:03, and every attempt failed with the same "nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out" error)
Apr 17 13:41:03.079: FAIL: Unexpected error:
    <*errors.errorString | 0xc003e72570>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000d43a20, {0x78eb710, 0xc0045e0780}, 0xc000c55180)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876 +0x7cf
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1870 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000cf8b60, 0x71566f0)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
Apr 17 13:41:03.079: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-5075, will wait for the garbage collector to delete the pods
Apr 17 13:41:03.157: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.002588ms
Apr 17 13:41:03.258: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.83192ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:41:05.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5075" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

• Failure [135.182 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:41:03.079: Unexpected error:
      <*errors.errorString | 0xc003e72570>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":2,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:41:05.492: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-8489
Apr 17 13:41:05.528: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:41:07.532: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Apr 17 13:41:07.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8489 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Apr 17 13:41:07.677: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n"
Apr 17 13:41:07.677: INFO: stdout: "iptables"
Apr 17 13:41:07.677: INFO: proxyMode: iptables
Apr 17 13:41:07.684: INFO: Waiting for pod kube-proxy-mode-detector to disappear
Apr 17 13:41:07.687: INFO: Pod kube-proxy-mode-detector no longer exists
STEP: creating service affinity-nodeport-timeout in namespace services-8489
STEP: creating replication controller affinity-nodeport-timeout in namespace services-8489
I0417 13:41:07.708510      18 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8489, replica count: 3
I0417 13:41:10.761618      18 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
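What follows is the harness creating a client pod and repeatedly probing the Service by name with netcat until a two-minute deadline expires. A minimal manual equivalent of that probe, sketched here with the names this run uses (namespace services-8489, exec pod execpod-affinity98fj9, Service affinity-nodeport-timeout) and the harness kubeconfig path, would be:

  # Hedged sketch: confirm the Service and its endpoints, then repeat the netcat probe by hand.
  kubectl --kubeconfig=/tmp/kubeconfig -n services-8489 get svc affinity-nodeport-timeout -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n services-8489 get endpoints affinity-nodeport-timeout
  kubectl --kubeconfig=/tmp/kubeconfig -n services-8489 exec execpod-affinity98fj9 -- \
    /bin/sh -x -c 'echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'

If the endpoints object is empty the backends never became ready; if it lists the three pods but nc still times out, the failure sits in the dataplane between the client pod and the backends.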
Apr 17 13:41:10.770: INFO: Creating new exec pod
Apr 17 13:41:13.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8489 exec execpod-affinity98fj9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80'
Apr 17 13:41:15.947: INFO: rc: 1
Apr 17 13:41:15.947: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8489 exec execpod-affinity98fj9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 affinity-nodeport-timeout 80
nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out: Operation in progress
command terminated with exit code 1

error:
exit status 1
Retrying...
(the same probe was retried roughly every three seconds from 13:41:16 through 13:43:18, and every attempt failed with the same "nc: connect to affinity-nodeport-timeout port 80 (tcp) timed out" error)
Apr 17 13:43:18.386: FAIL: Unexpected error:
    <*errors.errorString | 0xc002b16760>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForSessionAffinityTimeout(0xc000d43a20, {0x78eb710, 0xc000ac8c00}, 0xc00088bb80)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876 +0x7cf
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1870 +0x8b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000cf8b60, 0x71566f0)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
Apr 17 13:43:18.387: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-timeout in namespace services-8489, will wait for the garbage collector to delete the pods
Apr 17 13:43:18.482: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 6.244693ms
Apr 17 13:43:18.582: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.407457ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:43:27.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8489" for this suite.
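Both runs of this spec fail identically: every netcat to affinity-nodeport-timeout:80 times out even though the replication controller reports 3 of 3 pods running and the detector pod reported proxyMode: iptables. While the probe loop is still failing (the namespace is deleted at the end of the spec), a few hedged follow-up checks on the workload cluster could narrow this down; the k8s-app=kube-proxy label below is the conventional one and is an assumption about this cluster:

  # Hedged sketch: look at kube-proxy health and the Service's programmed endpoints.
  kubectl --kubeconfig=/tmp/kubeconfig -n kube-system get pods -l k8s-app=kube-proxy -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n kube-system logs -l k8s-app=kube-proxy --tail=100
  kubectl --kubeconfig=/tmp/kubeconfig -n services-8489 describe svc affinity-nodeport-timeout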
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753

• Failure [141.900 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:43:18.386: Unexpected error:
      <*errors.errorString | 0xc002b16760>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-nodeport-timeout:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2876
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":1,"skipped":4,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:39:44.029: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-7ba596e9-979f-4daa-956d-7a60d8c62e41 in namespace container-probe-5366
Apr 17 13:39:46.082: INFO: Started pod liveness-7ba596e9-979f-4daa-956d-7a60d8c62e41 in namespace container-probe-5366
STEP: checking the pod's current state and verifying that restartCount is present
Apr 17 13:39:46.084: INFO: Initial restart count of pod liveness-7ba596e9-979f-4daa-956d-7a60d8c62e41 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:43:46.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5366" for this suite.
• [SLOW TEST:242.680 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":457,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:36:26.758: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
W0417 13:36:26.803442      15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 17 13:36:26.803: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Apr 17 13:36:26.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 create -f -'
Apr 17 13:36:28.517: INFO: stderr: ""
Apr 17 13:36:28.517: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 13:36:28.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 17 13:36:28.657: INFO: stderr: ""
Apr 17 13:36:28.657: INFO: stdout: "update-demo-nautilus-25mhk update-demo-nautilus-g47k2 "
Apr 17 13:36:28.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods update-demo-nautilus-25mhk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:36:28.815: INFO: stderr: "" Apr 17 13:36:28.815: INFO: stdout: "" Apr 17 13:36:28.815: INFO: update-demo-nautilus-25mhk is created but not running Apr 17 13:36:33.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:36:33.896: INFO: stderr: "" Apr 17 13:36:33.896: INFO: stdout: "update-demo-nautilus-25mhk update-demo-nautilus-g47k2 " Apr 17 13:36:33.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods update-demo-nautilus-25mhk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:36:33.977: INFO: stderr: "" Apr 17 13:36:33.978: INFO: stdout: "true" Apr 17 13:36:33.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods update-demo-nautilus-25mhk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:36:34.043: INFO: stderr: "" Apr 17 13:36:34.043: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:36:34.043: INFO: validating pod update-demo-nautilus-25mhk Apr 17 13:40:07.674: INFO: update-demo-nautilus-25mhk is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-25mhk) Apr 17 13:40:12.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:40:12.755: INFO: stderr: "" Apr 17 13:40:12.755: INFO: stdout: "update-demo-nautilus-25mhk update-demo-nautilus-g47k2 " Apr 17 13:40:12.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods update-demo-nautilus-25mhk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:40:12.822: INFO: stderr: "" Apr 17 13:40:12.822: INFO: stdout: "true" Apr 17 13:40:12.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods update-demo-nautilus-25mhk -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:40:12.895: INFO: stderr: "" Apr 17 13:40:12.895: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:40:12.895: INFO: validating pod update-demo-nautilus-25mhk Apr 17 13:43:46.810: INFO: update-demo-nautilus-25mhk is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-25mhk) Apr 17 13:43:51.811: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x225 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0007bc9c0, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Apr 17 13:43:51.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 delete --grace-period=0 --force -f -' Apr 17 13:43:51.902: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 13:43:51.902: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 17 13:43:51.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get rc,svc -l name=update-demo --no-headers' Apr 17 13:43:51.987: INFO: stderr: "No resources found in kubectl-7375 namespace.\n" Apr 17 13:43:51.987: INFO: stdout: "" Apr 17 13:43:51.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7375 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 13:43:52.071: INFO: stderr: "" Apr 17 13:43:52.071: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:43:52.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7375" for this suite. 
• Failure [445.322 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

    Apr 17 13:43:51.812: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:43:46.732: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 17 13:43:46.777: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4740 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 13:43:46.777: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4741 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 13:43:46.778: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4742 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 17 13:43:56.823: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4811 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 13:43:56.823: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4812 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 13:43:56.823: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-7481  d402e70b-a2bf-4d2a-9030-9b910a8c4dcc 4813 0 2022-04-17 13:43:46 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2022-04-17 13:43:46 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:43:56.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7481" for this suite.
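For readers unfamiliar with the watch semantics this test exercises (a label-selected watch delivers a DELETED event when the object stops matching and a fresh ADDED event when the label is restored), a rough kubectl equivalent is sketched below; names mirror the log but the commands are illustrative, not the test's implementation:

```sh
# Create and label the configmap being watched.
kubectl create configmap e2e-watch-test-label-changed --from-literal=mutation=1
kubectl label configmap e2e-watch-test-label-changed watch-this-configmap=label-changed-and-restored

# Watch only configmaps carrying the label; changing the label away produces DELETED
# for this watch, changing it back produces ADDED again, matching the events logged above.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored \
  --watch --output-watch-events
```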
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":25,"skipped":473,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:43:56.853: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:43:57.405: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:44:00.428: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
Apr 17 13:44:10.451: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:44:20.562: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:44:30.671: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:44:40.762: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:44:50.772: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:44:50.772: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc0009d91e0, {0xc005151d50, 0xc}, 0xc003097310, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000232d00, 0x71566f0)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:44:50.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3546" for this suite.
STEP: Destroying namespace "webhook-3546-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [53.991 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:44:50.772: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:43:27.414: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: Ensuring more than one job is running at a time
STEP: Ensuring at least two running jobs exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:01.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8045" for this suite.
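The admission-webhook failures in this run share one symptom: the webhook deployment and service come up, but the registered configuration never becomes effective within the wait loop, so registerWebhookForAttachingPod times out (the readiness probing against marker objects is why the extra "-markers" namespace appears). As a hedged sketch, this is the general shape of the configuration being registered; the name, path and rule details are assumptions for illustration, not the test's exact object:

```sh
kubectl apply -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com   # illustrative name
webhooks:
- name: deny-attaching-pod.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]              # kubectl attach is a CONNECT on pods/attach
    resources: ["pods/attach"]
  clientConfig:
    service:
      name: e2e-test-webhook             # service name seen in the log
      namespace: webhook-3546            # namespace seen in the log
      path: /pods/attach                 # assumed path served by the webhook pod
      port: 443
    # caBundle omitted in this sketch; the e2e framework injects the cert it generated.
EOF
```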
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":2,"skipped":9,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:01.504: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 17 13:45:01.544: INFO: Waiting up to 5m0s for pod "pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2" in namespace "emptydir-2857" to be "Succeeded or Failed"
Apr 17 13:45:01.547: INFO: Pod "pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.798381ms
Apr 17 13:45:03.551: INFO: Pod "pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006980375s
STEP: Saw pod success
Apr 17 13:45:03.551: INFO: Pod "pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2" satisfied condition "Succeeded or Failed"
Apr 17 13:45:03.554: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2 container test-container: <nil>
STEP: delete the pod
Apr 17 13:45:03.576: INFO: Waiting for pod pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2 to disappear
Apr 17 13:45:03.578: INFO: Pod pod-8c392b31-f4c2-4df1-a9e5-c31eef5c67b2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:03.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2857" for this suite.
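The emptyDir tests above and below follow the same create-run-inspect pattern: start a short-lived pod, wait for "Succeeded or Failed", and read its log output for the expected volume type and mode. A hedged stand-alone sketch of that pattern (image, name, and the exact commands are illustrative, not the test's manifest):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check          # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "stat -c 'mode=%a' /mnt/volume && mount | grep /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium (node disk), default mode
EOF
kubectl get pod emptydir-mode-check -o jsonpath='{.status.phase}'   # expect Succeeded once it has run
kubectl logs emptydir-mode-check                                    # inspect the reported mode/mount
```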
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":15,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:03.630: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 17 13:45:03.660: INFO: Waiting up to 5m0s for pod "pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c" in namespace "emptydir-8818" to be "Succeeded or Failed"
Apr 17 13:45:03.663: INFO: Pod "pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.549216ms
Apr 17 13:45:05.667: INFO: Pod "pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006478856s
STEP: Saw pod success
Apr 17 13:45:05.667: INFO: Pod "pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c" satisfied condition "Succeeded or Failed"
Apr 17 13:45:05.670: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c container test-container: <nil>
STEP: delete the pod
Apr 17 13:45:05.686: INFO: Waiting for pod pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c to disappear
Apr 17 13:45:05.688: INFO: Pod pod-a21bbb5e-656f-4ac9-8643-23b222ffc61c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:05.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8818" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":52,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:05.701: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 17 13:45:05.737: INFO: Waiting up to 5m0s for pod "security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158" in namespace "security-context-5265" to be "Succeeded or Failed"
Apr 17 13:45:05.740: INFO: Pod "security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158": Phase="Pending", Reason="", readiness=false. Elapsed: 2.87387ms
Apr 17 13:45:07.744: INFO: Pod "security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00739439s
STEP: Saw pod success
Apr 17 13:45:07.744: INFO: Pod "security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158" satisfied condition "Succeeded or Failed"
Apr 17 13:45:07.747: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158 container test-container: <nil>
STEP: delete the pod
Apr 17 13:45:07.758: INFO: Waiting for pod security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158 to disappear
Apr 17 13:45:07.760: INFO: Pod security-context-c9cffb2a-438e-4460-96dc-e6d6011b7158 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:07.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-5265" for this suite.
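The security-context test just above uses the same pattern to verify that pod-level RunAsUser/RunAsGroup is reflected in the container's identity. A minimal illustrative sketch (names, UID/GID and image are assumptions):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo       # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                 # pod.Spec.SecurityContext.RunAsUser
    runAsGroup: 3000                # pod.Spec.SecurityContext.RunAsGroup
  containers:
  - name: test-container
    image: busybox:1.36
    command: ["sh", "-c", "id"]
EOF
kubectl logs security-context-demo  # expect uid=1000 gid=3000 once the pod has completed
```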
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":56,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":25,"skipped":484,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:44:50.845: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:44:51.265: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:44:54.286: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
Apr 17 13:45:04.304: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:45:14.416: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:45:24.522: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:45:34.615: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:45:44.624: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:45:44.625: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc0009d91e0, {0xc0045ea1c0, 0xc}, 0xc002cb6870, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000232d00, 0x71566f0)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:44.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7110" for this suite.
STEP: Destroying namespace "webhook-7110-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [53.892 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:45:44.625: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":25,"skipped":484,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:44.740: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:45:45.184: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:45:48.212: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 17 13:45:50.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-1056 attach --namespace=webhook-1056 to-be-attached-pod -i -c=container1'
Apr 17 13:45:50.346: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:45:50.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1056" for this suite.
STEP: Destroying namespace "webhook-1056-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":26,"skipped":484,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:07.805: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:07.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9136" for this suite.
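The probe test that just completed asserts the inverse of a liveness failure: a pod whose readiness probe always fails must stay Running, never become Ready, and never restart. An illustrative equivalent (pod name, image and probe timings are assumptions):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready        # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]      # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# Expect READY 0/1, STATUS Running, RESTARTS 0 for the lifetime of the pod.
kubectl get pod readiness-never-ready
```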
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":79,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:07.888: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 17 13:46:10.449: INFO: Successfully updated pod "adopt-release-mxc2h"
STEP: Checking that the Job readopts the Pod
Apr 17 13:46:10.449: INFO: Waiting up to 15m0s for pod "adopt-release-mxc2h" in namespace "job-5213" to be "adopted"
Apr 17 13:46:10.455: INFO: Pod "adopt-release-mxc2h": Phase="Running", Reason="", readiness=true. Elapsed: 5.843545ms
Apr 17 13:46:12.460: INFO: Pod "adopt-release-mxc2h": Phase="Running", Reason="", readiness=true. Elapsed: 2.010451332s
Apr 17 13:46:12.460: INFO: Pod "adopt-release-mxc2h" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 17 13:46:12.969: INFO: Successfully updated pod "adopt-release-mxc2h"
STEP: Checking that the Job releases the Pod
Apr 17 13:46:12.969: INFO: Waiting up to 15m0s for pod "adopt-release-mxc2h" in namespace "job-5213" to be "released"
Apr 17 13:46:12.972: INFO: Pod "adopt-release-mxc2h": Phase="Running", Reason="", readiness=true. Elapsed: 2.619933ms
Apr 17 13:46:14.976: INFO: Pod "adopt-release-mxc2h": Phase="Running", Reason="", readiness=true. Elapsed: 2.006974885s
Apr 17 13:46:14.976: INFO: Pod "adopt-release-mxc2h" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:14.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5213" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":7,"skipped":104,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:15.025: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Given a Pod with a 'name' label pod-adoption is created
Apr 17 13:46:15.068: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:17.072: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:18.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5189" for this suite.
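Adoption here (and in the Job test above it) rests on label-selector matching plus ownerReference management by the controller: a pre-existing bare pod whose labels match a newly created ReplicationController's selector is adopted rather than replaced. A hedged sketch of that flow, with illustrative names and image:

```sh
# A bare pod carrying the label the controller will select on.
kubectl run pod-adoption --image=busybox:1.36 --labels=name=pod-adoption -- sleep 3600

# A ReplicationController whose selector matches the orphan pod.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox:1.36
        command: ["sh", "-c", "sleep 3600"]
EOF

# The orphan should now carry an ownerReference to the RC instead of being recreated.
kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[*].kind}'
```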
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":8,"skipped":136,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:18.103: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of pod templates
Apr 17 13:46:18.130: INFO: created test-podtemplate-1
Apr 17 13:46:18.133: INFO: created test-podtemplate-2
Apr 17 13:46:18.138: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Apr 17 13:46:18.141: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Apr 17 13:46:18.156: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:18.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-3966" for this suite.
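The podtemplate test is a straightforward exercise of list/delete-collection semantics against labelled objects. Roughly, with illustrative labels, names and image:

```sh
# Create a few labelled PodTemplates, then delete them as a collection by label.
for i in 1 2 3; do
kubectl create -f - <<EOF
apiVersion: v1
kind: PodTemplate
metadata:
  name: test-podtemplate-$i
  labels:
    podtemplate-set: e2e
template:
  spec:
    containers:
    - name: nginx
      image: nginx:1.25
EOF
done
kubectl get podtemplates -l podtemplate-set=e2e
kubectl delete podtemplates -l podtemplate-set=e2e   # a DeleteCollection request under the hood
kubectl get podtemplates -l podtemplate-set=e2e      # should report no resources found
```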
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":9,"skipped":143,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:18.169: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Apr 17 13:46:18.205: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:20.216: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled
Apr 17 13:46:20.224: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:22.228: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides
Apr 17 13:46:22.237: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:24.242: INFO: The status of Pod pod3 is Running (Ready = true)
Apr 17 13:46:24.252: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:26.256: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Apr 17 13:46:26.259: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.4 http://127.0.0.1:54323/hostname] Namespace:hostport-8066 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 17 13:46:26.259: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:46:26.259: INFO: ExecWithOptions: Clientset creation
Apr 17 13:46:26.260: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8066/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.4+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323
Apr 17 13:46:26.366: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.4:54323/hostname] Namespace:hostport-8066 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 17 13:46:26.366: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:46:26.367: INFO: ExecWithOptions: Clientset creation
Apr 17 13:46:26.367: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8066/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.4%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323 UDP
Apr 17 13:46:26.443: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.4 54323] Namespace:hostport-8066 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 17 13:46:26.443: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 17 13:46:26.444: INFO: ExecWithOptions: Clientset creation
Apr 17 13:46:26.444: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-8066/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.4+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:31.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-8066" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":143,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:31.606: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-86678d5f-1364-470f-8429-bbb9262dbb47
STEP: Creating a pod to test consume configMaps
Apr 17 13:46:31.646: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec" in namespace "projected-3126" to be "Succeeded or Failed"
Apr 17 13:46:31.649: INFO: Pod "pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.770034ms
Apr 17 13:46:33.652: INFO: Pod "pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006271478s
STEP: Saw pod success
Apr 17 13:46:33.652: INFO: Pod "pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec" satisfied condition "Succeeded or Failed"
Apr 17 13:46:33.655: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec container agnhost-container: <nil>
STEP: delete the pod
Apr 17 13:46:33.668: INFO: Waiting for pod pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec to disappear
Apr 17 13:46:33.670: INFO: Pod pod-projected-configmaps-7dc2ea6c-693f-4fab-9e6f-2bb6b4c080ec no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:33.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3126" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":178,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:45:50.432: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-b1375eab-f87a-4259-bb5d-876d31c2eb61 in namespace container-probe-9629
Apr 17 13:45:52.509: INFO: Started pod busybox-b1375eab-f87a-4259-bb5d-876d31c2eb61 in namespace container-probe-9629
STEP: checking the pod's current state and verifying that restartCount is present
Apr 17 13:45:52.512: INFO: Initial restart count of pod busybox-b1375eab-f87a-4259-bb5d-876d31c2eb61 is 0
Apr 17 13:46:42.622: INFO: Restart count of pod container-probe-9629/busybox-b1375eab-f87a-4259-bb5d-876d31c2eb61 is now 1 (50.109949657s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:42.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9629" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":486,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:42.664: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-6e0286c6-01dd-469d-b792-97f69f5c0622
STEP: Creating a pod to test consume secrets
Apr 17 13:46:42.702: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92" in namespace "projected-7830" to be "Succeeded or Failed"
Apr 17 13:46:42.706: INFO: Pod "pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352127ms
Apr 17 13:46:44.710: INFO: Pod "pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007958624s
STEP: Saw pod success
Apr 17 13:46:44.710: INFO: Pod "pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92" satisfied condition "Succeeded or Failed"
Apr 17 13:46:44.713: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 17 13:46:44.726: INFO: Waiting for pod pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92 to disappear
Apr 17 13:46:44.728: INFO: Pod pod-projected-secrets-9d654310-e15e-4629-a64d-5ce81b8ffd92 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:44.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7830" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":509,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:44.743: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-ab90a463-6795-4a49-b2e6-eb447e58caf0
STEP: Creating secret with name s-test-opt-upd-ed2d2c3f-0efc-408c-ad0c-5e09efd03818
STEP: Creating the pod
Apr 17 13:46:44.799: INFO: The status of Pod pod-secrets-cf614451-a62b-4140-abcb-ba85d69b4168 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:46:46.804: INFO: The status of Pod pod-secrets-cf614451-a62b-4140-abcb-ba85d69b4168 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-ab90a463-6795-4a49-b2e6-eb447e58caf0
STEP: Updating secret s-test-opt-upd-ed2d2c3f-0efc-408c-ad0c-5e09efd03818
STEP: Creating secret with name s-test-opt-create-b3469b82-e902-4606-9240-c745d0df5d45
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:50.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5196" for this suite.
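The Secrets test that just finished exercises optional secret volume sources: a mount backed by a secret that is later deleted must drain, and a mount backed by a secret created only after the pod starts must appear, all without restarting the pod. An illustrative sketch (pod name, mount paths and image are assumptions):

```sh
kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-watcher            # illustrative name
spec:
  containers:
  - name: watcher
    image: busybox:1.36
    command: ["sh", "-c", "while true; do ls /etc/opt-del /etc/opt-create; sleep 5; done"]
    volumeMounts:
    - name: opt-del
      mountPath: /etc/opt-del
    - name: opt-create
      mountPath: /etc/opt-create
  volumes:
  - name: opt-del
    secret:
      secretName: s-test-opt-del
      optional: true                     # pod keeps running if the secret disappears
  - name: opt-create
    secret:
      secretName: s-test-opt-create
      optional: true                     # may be created after the pod is running
EOF
kubectl delete secret s-test-opt-del
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
kubectl logs -f secret-volume-watcher    # watch the mounted directories converge
```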
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":513,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:50.884: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1331
STEP: creating the pod
Apr 17 13:46:50.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 create -f -'
Apr 17 13:46:51.114: INFO: stderr: ""
Apr 17 13:46:51.114: INFO: stdout: "pod/pause created\n"
Apr 17 13:46:51.114: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 17 13:46:51.114: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9580" to be "running and ready"
Apr 17 13:46:51.117: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.897238ms
Apr 17 13:46:53.121: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.006470546s
Apr 17 13:46:53.121: INFO: Pod "pause" satisfied condition "running and ready"
Apr 17 13:46:53.121: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 17 13:46:53.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 label pods pause testing-label=testing-label-value'
Apr 17 13:46:53.199: INFO: stderr: ""
Apr 17 13:46:53.199: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 17 13:46:53.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 get pod pause -L testing-label'
Apr 17 13:46:53.266: INFO: stderr: ""
Apr 17 13:46:53.266: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 17 13:46:53.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 label pods pause testing-label-'
Apr 17 13:46:53.339: INFO: stderr: ""
Apr 17 13:46:53.339: INFO: stdout: "pod/pause unlabeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 17 13:46:53.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 get pod pause -L testing-label'
Apr 17 13:46:53.402: INFO: stderr: ""
Apr 17 13:46:53.403: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n"
[AfterEach] Kubectl label /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1337
STEP: using delete to clean up resources
Apr 17 13:46:53.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 delete --grace-period=0 --force -f -'
Apr 17 13:46:53.473: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 17 13:46:53.473: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 17 13:46:53.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 get rc,svc -l name=pause --no-headers'
Apr 17 13:46:53.547: INFO: stderr: "No resources found in kubectl-9580 namespace.\n"
Apr 17 13:46:53.547: INFO: stdout: ""
Apr 17 13:46:53.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9580 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 17 13:46:53.612: INFO: stderr: ""
Apr 17 13:46:53.612: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:53.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9580" for this suite.
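Editor's note: the spec above drives the kubectl CLI directly. The same add/verify/remove label cycle can be expressed against the API with client-go merge patches, where setting a label to null removes it. This sketch is not the suite's code; the namespace and pod name are assumptions standing in for the generated kubectl-XXXX namespace.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, pod := "default", "pause" // assumptions for illustration

	// Equivalent of `kubectl label pods pause testing-label=testing-label-value`.
	addLabel := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, addLabel, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Equivalent of `kubectl get pod pause -L testing-label`: read the label back.
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("testing-label =", p.Labels["testing-label"])

	// Equivalent of `kubectl label pods pause testing-label-`: a null value deletes the key.
	removeLabel := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(ctx, pod, types.MergePatchType, removeLabel, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}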
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":30,"skipped":521,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:53.629: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-02f596da-1d95-447b-a64a-d86ad81cfaa7
STEP: Creating a pod to test consume configMaps
Apr 17 13:46:53.662: INFO: Waiting up to 5m0s for pod "pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58" in namespace "configmap-8335" to be "Succeeded or Failed"
Apr 17 13:46:53.666: INFO: Pod "pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.792259ms
Apr 17 13:46:55.670: INFO: Pod "pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007751042s
STEP: Saw pod success
Apr 17 13:46:55.670: INFO: Pod "pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58" satisfied condition "Succeeded or Failed"
Apr 17 13:46:55.673: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58 container agnhost-container: <nil>
STEP: delete the pod
Apr 17 13:46:55.699: INFO: Waiting for pod pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58 to disappear
Apr 17 13:46:55.702: INFO: Pod pod-configmaps-3aac9f7a-c3ce-4645-b335-cafc67d42e58 no longer exists
[AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:55.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8335" for this suite.
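Editor's note: the ConfigMap spec just above follows the same pattern as the projected Secret sketch earlier, with two differences, a ConfigMap volume source and a non-root pod security context. A minimal client-go sketch, again with an assumed namespace, image, UID, and mount path rather than the suite's generated values:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // assumption: the suite uses a generated configmap-XXXX namespace
	nonRoot, uid := true, int64(1000)

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-volume", Namespace: ns},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// The "as non-root" part of the spec: run the whole pod under a non-root UID.
			SecurityContext: &corev1.PodSecurityContext{RunAsNonRoot: &nonRoot, RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "agnhost-container",
				Image:   "busybox", // assumption: the suite uses its own agnhost test image here
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "configmap-volume",
					MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}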
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":526,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:46:55.771: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Apr 17 13:46:55.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4861 create -f -'
Apr 17 13:46:56.114: INFO: stderr: ""
Apr 17 13:46:56.114: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Apr 17 13:46:57.119: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 13:46:57.119: INFO: Found 0 / 1
Apr 17 13:46:58.117: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 13:46:58.117: INFO: Found 1 / 1
Apr 17 13:46:58.117: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 17 13:46:58.122: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 13:46:58.122: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 17 13:46:58.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4861 patch pod agnhost-primary-z8dj2 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 17 13:46:58.202: INFO: stderr: ""
Apr 17 13:46:58.202: INFO: stdout: "pod/agnhost-primary-z8dj2 patched\n"
STEP: checking annotations
Apr 17 13:46:58.205: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 17 13:46:58.205: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:46:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4861" for this suite.
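Editor's note: the spec above creates a ReplicationController, finds its pod through the app=agnhost label selector, and then patches an annotation onto it with kubectl. A rough client-go equivalent of that ForEach loop is sketched below; it is not the suite's code, and the namespace is an assumption standing in for the generated kubectl-XXXX namespace.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns := "default" // assumption for illustration

	// Find the RC's pods by label, the same selector the spec logs.
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: "app=agnhost"})
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl patch pod <name> -p '{"metadata":{"annotations":{"x":"y"}}}'`,
	// which defaults to a strategic merge patch.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	for _, p := range pods.Items {
		if _, err := cs.CoreV1().Pods(ns).Patch(ctx, p.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("patched", p.Name)
	}
}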
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":32,"skipped":570,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:46:33.726: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-3869 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Apr 17 13:46:33.750: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 13:46:33.793: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:46:35.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:37.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:39.800: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:41.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:43.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:45.799: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:47.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:49.797: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:51.798: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:46:53.796: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 13:46:53.801: INFO: The status of Pod netserver-1 is Running (Ready = true) Apr 17 13:46:53.810: INFO: The status of Pod netserver-2 is Running (Ready = true) Apr 17 13:46:53.815: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Apr 17 13:46:55.841: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Apr 17 13:46:55.841: INFO: Going to poll 192.168.2.22 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 17 13:46:55.843: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3869 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:46:55.843: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:46:55.844: INFO: ExecWithOptions: Clientset creation Apr 17 13:46:55.844: INFO: ExecWithOptions: execute(POST 
https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3869/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.22+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:46:56.967: INFO: Found all 1 expected endpoints: [netserver-0] Apr 17 13:46:56.967: INFO: Going to poll 192.168.0.23 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 17 13:46:56.970: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.23 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3869 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:46:56.970: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:46:56.971: INFO: ExecWithOptions: Clientset creation Apr 17 13:46:56.971: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3869/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.0.23+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:46:58.043: INFO: Found all 1 expected endpoints: [netserver-1] Apr 17 13:46:58.043: INFO: Going to poll 192.168.3.22 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 17 13:46:58.046: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.3.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3869 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:46:58.046: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:46:58.047: INFO: ExecWithOptions: Clientset creation Apr 17 13:46:58.047: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3869/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.3.22+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:46:59.133: INFO: Found all 1 expected endpoints: [netserver-2] Apr 17 13:46:59.133: INFO: Going to poll 192.168.6.10 on port 8081 at least 0 times, with a maximum of 46 tries before failing Apr 17 13:46:59.136: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.6.10 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3869 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:46:59.136: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:46:59.137: INFO: ExecWithOptions: Clientset creation Apr 17 13:46:59.137: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3869/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.6.10+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:47:00.232: INFO: Found all 1 expected endpoints: [netserver-3] [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 
13:47:00.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-3869" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:46:58.215: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a Deployment Apr 17 13:46:58.252: INFO: Creating simple deployment test-deployment-btp65 Apr 17 13:46:58.264: INFO: deployment "test-deployment-btp65" doesn't have the required revision set �[1mSTEP�[0m: Getting /status Apr 17 13:47:00.284: INFO: Deployment test-deployment-btp65 has Conditions: [{Available True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-btp65-764bc7c4b7" has successfully progressed.}] �[1mSTEP�[0m: updating Deployment Status Apr 17 13:47:00.297: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 46, 59, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 46, 59, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 46, 59, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 46, 58, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-btp65-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} �[1mSTEP�[0m: watching for the Deployment status to be updated Apr 17 13:47:00.300: INFO: Observed &Deployment event: ADDED Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-btp65-764bc7c4b7"} Apr 17 13:47:00.300: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-btp65-764bc7c4b7"} Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & 
Conditions: {Available False 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 17 13:47:00.300: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-btp65-764bc7c4b7" is progressing.} Apr 17 13:47:00.300: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 17 13:47:00.300: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-btp65-764bc7c4b7" has successfully progressed.} Apr 17 13:47:00.300: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.301: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 17 13:47:00.301: INFO: Observed Deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-btp65-764bc7c4b7" has successfully progressed.} Apr 17 13:47:00.301: INFO: Found Deployment test-deployment-btp65 in namespace deployment-7265 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 17 13:47:00.301: INFO: Deployment test-deployment-btp65 has an updated status �[1mSTEP�[0m: patching the Statefulset Status Apr 17 13:47:00.301: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Apr 17 13:47:00.308: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} �[1mSTEP�[0m: watching for the Deployment status to be patched Apr 17 13:47:00.310: INFO: Observed &Deployment event: ADDED Apr 17 13:47:00.310: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC 
NewReplicaSetCreated Created new replica set "test-deployment-btp65-764bc7c4b7"} Apr 17 13:47:00.310: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.310: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-btp65-764bc7c4b7"} Apr 17 13:47:00.310: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 17 13:47:00.310: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.310: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:58 +0000 UTC 2022-04-17 13:46:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-btp65-764bc7c4b7" is progressing.} Apr 17 13:47:00.311: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-btp65-764bc7c4b7" has successfully progressed.} Apr 17 13:47:00.311: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:59 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-17 13:46:59 +0000 UTC 2022-04-17 13:46:58 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-btp65-764bc7c4b7" has successfully progressed.} Apr 17 13:47:00.311: INFO: Observed deployment test-deployment-btp65 in namespace deployment-7265 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 17 13:47:00.311: INFO: Observed &Deployment event: MODIFIED Apr 17 13:47:00.311: INFO: Found deployment test-deployment-btp65 in namespace deployment-7265 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 
0001-01-01 00:00:00 +0000 UTC } Apr 17 13:47:00.311: INFO: Deployment test-deployment-btp65 has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 17 13:47:00.316: INFO: Deployment "test-deployment-btp65": &Deployment{ObjectMeta:{test-deployment-btp65 deployment-7265 a97e7e01-1de7-4279-8796-8295eb6058da 6079 1 2022-04-17 13:46:58 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-17 13:46:58 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-17 13:46:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2022-04-17 13:47:00 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004140d38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 17 13:47:00.319: INFO: New ReplicaSet "test-deployment-btp65-764bc7c4b7" of Deployment "test-deployment-btp65": 
&ReplicaSet{ObjectMeta:{test-deployment-btp65-764bc7c4b7 deployment-7265 b8de6404-05f0-4a33-83b2-adbae9f55840 6063 1 2022-04-17 13:46:58 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-btp65 a97e7e01-1de7-4279-8796-8295eb6058da 0xc0041410d7 0xc0041410d8}] [] [{kube-controller-manager Update apps/v1 2022-04-17 13:46:58 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a97e7e01-1de7-4279-8796-8295eb6058da\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-17 13:46:59 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004141188 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 13:47:00.321: INFO: Pod "test-deployment-btp65-764bc7c4b7-8lt5v" is available: &Pod{ObjectMeta:{test-deployment-btp65-764bc7c4b7-8lt5v test-deployment-btp65-764bc7c4b7- deployment-7265 0fc19937-825c-4524-ba49-28a234be24bf 6062 0 2022-04-17 13:46:58 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-btp65-764bc7c4b7 b8de6404-05f0-4a33-83b2-adbae9f55840 0xc003fdb2f7 0xc003fdb2f8}] [] [{kube-controller-manager Update v1 2022-04-17 13:46:58 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b8de6404-05f0-4a33-83b2-adbae9f55840\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-17 13:46:59 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5c4xd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5c4xd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrad
e-and-conformance-4exvhp-md-0-7b94d55997-k6cck,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:46:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:46:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:46:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:46:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.2.25,StartTime:2022-04-17 13:46:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-17 13:46:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4c68ad98067a934c1cced9d5f198a92b2080789bb9547489513f3365a542b32b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:00.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-7265" for this suite. 
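Editor's note: the Deployment spec that just finished reads, updates, and patches the Deployment's status subresource, which is what produces the StatusUpdate and StatusPatched conditions visible in the log above. The sketch below shows only the patch leg with client-go; it is not the suite's code, the payload mirrors the one logged, and the namespace and Deployment name are illustrative assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	ctx := context.Background()
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ns, name := "default", "test-deployment" // assumptions: the spec uses generated names

	// Read the current conditions, the same data the spec logs before patching.
	d, err := cs.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions before patch:", d.Status.Conditions)

	// Patch only the status subresource (note the trailing "status" argument),
	// using the same payload shown in the log.
	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
	patched, err := cs.AppsV1().Deployments(ns).Patch(ctx, name, types.MergePatchType, payload, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
	fmt.Println("conditions after patch:", patched.Status.Conditions)
}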
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":33,"skipped":571,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":219,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:00.243: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:47:00.279: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d" in namespace "downward-api-3643" to be "Succeeded or Failed" Apr 17 13:47:00.283: INFO: Pod "downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.439061ms Apr 17 13:47:02.287: INFO: Pod "downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007565376s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:02.287: INFO: Pod "downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d" satisfied condition "Succeeded or Failed" Apr 17 13:47:02.290: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d container client-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:47:02.304: INFO: Waiting for pod downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d to disappear Apr 17 13:47:02.306: INFO: Pod downwardapi-volume-51e6dbd6-1ef5-464e-87da-65d67aca904d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:02.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-3643" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":219,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:00.365: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:47:00.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792" in namespace "projected-8114" to be "Succeeded or Failed" Apr 17 13:47:00.401: INFO: Pod "downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792": Phase="Pending", Reason="", readiness=false. Elapsed: 2.439166ms Apr 17 13:47:02.405: INFO: Pod "downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006362396s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:02.405: INFO: Pod "downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792" satisfied condition "Succeeded or Failed" Apr 17 13:47:02.407: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792 container client-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:47:02.429: INFO: Waiting for pod downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792 to disappear Apr 17 13:47:02.432: INFO: Pod downwardapi-volume-12472bcf-b0a0-4caa-9b39-aedf1af0f792 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:02.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-8114" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":595,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:02.454: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:47:02.501: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e" in namespace "projected-7692" to be "Succeeded or Failed" Apr 17 13:47:02.504: INFO: Pod "downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.857033ms Apr 17 13:47:04.510: INFO: Pod "downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.009024534s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:04.511: INFO: Pod "downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e" satisfied condition "Succeeded or Failed" Apr 17 13:47:04.514: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e container client-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:47:04.534: INFO: Waiting for pod downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e to disappear Apr 17 13:47:04.536: INFO: Pod downwardapi-volume-0008d957-047c-4e6f-ab9f-e962860af96e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:04.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7692" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":602,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:04.547: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-2563 �[1mSTEP�[0m: changing the ExternalName service to type=NodePort �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-2563 I0417 13:47:04.622786 19 runners.go:193] Created replication controller with name: externalname-service, namespace: services-2563, replica count: 2 I0417 13:47:07.673916 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 13:47:07.674: INFO: Creating new exec pod Apr 17 13:47:10.693: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2563 exec execpodwb85k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 17 13:47:10.841: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 17 13:47:10.841: INFO: stdout: "" Apr 17 13:47:11.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2563 exec execpodwb85k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 17 13:47:11.981: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] 
succeeded!\n" Apr 17 13:47:11.981: INFO: stdout: "externalname-service-b2gvq" Apr 17 13:47:11.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2563 exec execpodwb85k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.136.24 80' Apr 17 13:47:12.114: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.136.24 80\nConnection to 10.135.136.24 80 port [tcp/http] succeeded!\n" Apr 17 13:47:12.114: INFO: stdout: "externalname-service-b2gvq" Apr 17 13:47:12.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2563 exec execpodwb85k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 32375' Apr 17 13:47:12.251: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 32375\nConnection to 172.18.0.6 32375 port [tcp/*] succeeded!\n" Apr 17 13:47:12.251: INFO: stdout: "externalname-service-26dbx" Apr 17 13:47:12.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2563 exec execpodwb85k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 32375' Apr 17 13:47:12.420: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 32375\nConnection to 172.18.0.5 32375 port [tcp/*] succeeded!\n" Apr 17 13:47:12.421: INFO: stdout: "externalname-service-b2gvq" Apr 17 13:47:12.421: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:12.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-2563" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":36,"skipped":602,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:12.526: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should delete a collection of services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a collection of services Apr 17 13:47:12.566: INFO: Creating e2e-svc-a-sxjfs Apr 17 13:47:12.581: INFO: Creating e2e-svc-b-cgpx9 Apr 17 13:47:12.599: INFO: Creating e2e-svc-c-jzmrq �[1mSTEP�[0m: 
deleting service collection Apr 17 13:47:12.647: INFO: Collection of services has been deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:12.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7477" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":37,"skipped":631,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:02.335: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-6297 [It] Should recreate evicted statefulset [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-6297 STEP: Waiting until pod test-pod will start running in namespace statefulset-6297 STEP: Creating statefulset with conflicting port in namespace statefulset-6297 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-6297 Apr 17 13:47:08.421: INFO: Observed stateful pod in namespace: statefulset-6297, name: ss-0, uid: ddb1dc8a-3fb9-4a7e-ba17-534423c7ed8b, status phase: Pending. Waiting for statefulset controller to delete. Apr 17 13:47:08.432: INFO: Observed stateful pod in namespace: statefulset-6297, name: ss-0, uid: ddb1dc8a-3fb9-4a7e-ba17-534423c7ed8b, status phase: Failed. Waiting for statefulset controller to delete. Apr 17 13:47:08.439: INFO: Observed stateful pod in namespace: statefulset-6297, name: ss-0, uid: ddb1dc8a-3fb9-4a7e-ba17-534423c7ed8b, status phase: Failed. Waiting for statefulset controller to delete.
Apr 17 13:47:08.444: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-6297 STEP: Removing pod with conflicting port in namespace statefulset-6297 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-6297 and will be in running state [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 17 13:47:10.472: INFO: Deleting all statefulset in ns statefulset-6297 Apr 17 13:47:10.475: INFO: Scaling statefulset ss to 0 Apr 17 13:47:20.493: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 13:47:20.499: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:20.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6297" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":-1,"completed":14,"skipped":235,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:20.587: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:20.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-4673" for this suite.
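Note: the PodTemplates spec above only logs its namespace setup and teardown; the create/read/delete calls happen inside the test body. As a rough, hypothetical illustration (not the suite's own code), a client-go sketch of the same kind of PodTemplate lifecycle could look like the following, assuming a kubeconfig at /tmp/kubeconfig, a reachable cluster, and the "default" namespace; names, labels, and the nginx image are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig path (an assumption about the environment).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx := context.Background()
	ns := "default"

	pt := &corev1.PodTemplate{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "demo-podtemplate",
			Labels: map[string]string{"demo": "podtemplate"},
		},
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{Name: "nginx", Image: "nginx:1.21"}},
			},
		},
	}

	// Create the template, list it back by label, then delete it: a minimal
	// create/read/delete cycle of the kind a PodTemplates lifecycle test exercises.
	if _, err := cs.CoreV1().PodTemplates(ns).Create(ctx, pt, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	list, err := cs.CoreV1().PodTemplates(ns).List(ctx, metav1.ListOptions{LabelSelector: "demo=podtemplate"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d podtemplate(s)\n", len(list.Items))
	if err := cs.CoreV1().PodTemplates(ns).Delete(ctx, "demo-podtemplate", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}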
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":15,"skipped":254,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:12.676: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:47:12.706: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: creating replication controller svc-latency-rc in namespace svc-latency-8243 I0417 13:47:12.715121 19 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8243, replica count: 1 I0417 13:47:13.766831 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 13:47:13.879: INFO: Created: latency-svc-cvqlf Apr 17 13:47:13.889: INFO: Got endpoints: latency-svc-cvqlf [21.455157ms] Apr 17 13:47:13.901: INFO: Created: latency-svc-k86lq Apr 17 13:47:13.908: INFO: Got endpoints: latency-svc-k86lq [18.833686ms] Apr 17 13:47:13.911: INFO: Created: latency-svc-jwhmp Apr 17 13:47:13.913: INFO: Got endpoints: latency-svc-jwhmp [23.860235ms] Apr 17 13:47:13.922: INFO: Created: latency-svc-cqt9g Apr 17 13:47:13.925: INFO: Got endpoints: latency-svc-cqt9g [36.417779ms] Apr 17 13:47:13.930: INFO: Created: latency-svc-zklsq Apr 17 13:47:13.934: INFO: Got endpoints: latency-svc-zklsq [44.507849ms] Apr 17 13:47:13.938: INFO: Created: latency-svc-lgc74 Apr 17 13:47:13.945: INFO: Got endpoints: latency-svc-lgc74 [55.978252ms] Apr 17 13:47:13.951: INFO: Created: latency-svc-zpmqz Apr 17 13:47:13.959: INFO: Got endpoints: latency-svc-zpmqz [70.140546ms] Apr 17 13:47:13.960: INFO: Created: latency-svc-m5hvs Apr 17 13:47:13.970: INFO: Got endpoints: latency-svc-m5hvs [81.007636ms] Apr 17 13:47:13.973: INFO: Created: latency-svc-w82gn Apr 17 13:47:13.978: INFO: Got endpoints: latency-svc-w82gn [88.651451ms] Apr 17 13:47:13.984: INFO: Created: latency-svc-5p57r Apr 17 13:47:13.987: INFO: Got endpoints: latency-svc-5p57r [97.302282ms] Apr 17 13:47:13.997: INFO: Created: latency-svc-k76qs Apr 17 13:47:14.004: INFO: Got endpoints: latency-svc-k76qs [114.96319ms] Apr 17 13:47:14.007: INFO: Created: latency-svc-7mf54 Apr 17 13:47:14.011: INFO: Got endpoints: latency-svc-7mf54 [121.975359ms] Apr 17 13:47:14.015: INFO: Created: latency-svc-jxlwt Apr 17 13:47:14.025: INFO: Created: latency-svc-9x42z Apr 17 13:47:14.025: INFO: Got endpoints: latency-svc-jxlwt [135.695116ms] Apr 17 13:47:14.028: INFO: Got endpoints: latency-svc-9x42z [138.438322ms]
Apr 17 13:47:14.035: INFO: Created: latency-svc-mwlzb Apr 17 13:47:14.038: INFO: Got endpoints: latency-svc-mwlzb [148.533251ms] Apr 17 13:47:14.043: INFO: Created: latency-svc-95rjj Apr 17 13:47:14.047: INFO: Got endpoints: latency-svc-95rjj [158.116151ms] Apr 17 13:47:14.054: INFO: Created: latency-svc-5dqtj Apr 17 13:47:14.058: INFO: Got endpoints: latency-svc-5dqtj [149.706557ms] Apr 17 13:47:14.063: INFO: Created: latency-svc-mccwp Apr 17 13:47:14.069: INFO: Got endpoints: latency-svc-mccwp [155.446018ms] Apr 17 13:47:14.075: INFO: Created: latency-svc-7gd2h Apr 17 13:47:14.082: INFO: Got endpoints: latency-svc-7gd2h [156.361121ms] Apr 17 13:47:14.085: INFO: Created: latency-svc-wbfnh Apr 17 13:47:14.094: INFO: Got endpoints: latency-svc-wbfnh [160.212898ms] Apr 17 13:47:14.097: INFO: Created: latency-svc-lmd7q Apr 17 13:47:14.107: INFO: Got endpoints: latency-svc-lmd7q [161.725307ms] Apr 17 13:47:14.111: INFO: Created: latency-svc-f92vm Apr 17 13:47:14.115: INFO: Got endpoints: latency-svc-f92vm [155.012929ms] Apr 17 13:47:14.120: INFO: Created: latency-svc-rjwp8 Apr 17 13:47:14.128: INFO: Got endpoints: latency-svc-rjwp8 [157.646461ms] Apr 17 13:47:14.131: INFO: Created: latency-svc-9j7fw Apr 17 13:47:14.136: INFO: Got endpoints: latency-svc-9j7fw [158.176495ms] Apr 17 13:47:14.138: INFO: Created: latency-svc-fh84j Apr 17 13:47:14.143: INFO: Got endpoints: latency-svc-fh84j [156.277356ms] Apr 17 13:47:14.150: INFO: Created: latency-svc-bz647 Apr 17 13:47:14.156: INFO: Got endpoints: latency-svc-bz647 [151.421331ms] Apr 17 13:47:14.161: INFO: Created: latency-svc-kzg45 Apr 17 13:47:14.163: INFO: Got endpoints: latency-svc-kzg45 [151.113027ms] Apr 17 13:47:14.168: INFO: Created: latency-svc-gbblz Apr 17 13:47:14.176: INFO: Got endpoints: latency-svc-gbblz [150.481816ms] Apr 17 13:47:14.180: INFO: Created: latency-svc-vd4mz Apr 17 13:47:14.184: INFO: Got endpoints: latency-svc-vd4mz [155.589671ms] Apr 17 13:47:14.193: INFO: Created: latency-svc-jznln Apr 17 13:47:14.197: INFO: Got endpoints: latency-svc-jznln [159.016812ms] Apr 17 13:47:14.204: INFO: Created: latency-svc-4qgmx Apr 17 13:47:14.220: INFO: Got endpoints: latency-svc-4qgmx [172.708762ms] Apr 17 13:47:14.225: INFO: Created: latency-svc-q26v2 Apr 17 13:47:14.230: INFO: Got endpoints: latency-svc-q26v2 [172.344845ms] Apr 17 13:47:14.238: INFO: Created: latency-svc-5kmhx Apr 17 13:47:14.241: INFO: Got endpoints: latency-svc-5kmhx [171.886252ms] Apr 17 13:47:14.254: INFO: Created: latency-svc-hfvcw Apr 17 13:47:14.259: INFO: Got endpoints: latency-svc-hfvcw [177.491354ms] Apr 17 13:47:14.272: INFO: Created: latency-svc-v6b89 Apr 17 13:47:14.280: INFO: Got endpoints: latency-svc-v6b89 [185.401708ms] Apr 17 13:47:14.296: INFO: Created: latency-svc-x5ft5 Apr 17 13:47:14.301: INFO: Got endpoints: latency-svc-x5ft5 [193.676649ms] Apr 17 13:47:14.313: INFO: Created: latency-svc-r7grp Apr 17 13:47:14.319: INFO: Got endpoints: latency-svc-r7grp [203.723364ms] Apr 17 13:47:14.341: INFO: Created: latency-svc-8w679 Apr 17 13:47:14.360: INFO: Got endpoints: latency-svc-8w679 [232.258249ms] Apr 17 13:47:14.361: INFO: Created: latency-svc-gfwfk Apr 17 13:47:14.373: INFO: Got endpoints: latency-svc-gfwfk [236.652132ms] Apr 17 13:47:14.379: INFO: Created: latency-svc-zvvbp Apr 17 13:47:14.387: INFO: Got endpoints: latency-svc-zvvbp [243.299716ms] Apr 17 13:47:14.388: INFO: Created: latency-svc-kl62x Apr 17 13:47:14.395: INFO: Created: latency-svc-5sfxz Apr 17 13:47:14.403: INFO: Created: latency-svc-qpd8b Apr 17 13:47:14.409: INFO: Created: 
latency-svc-ngqhb Apr 17 13:47:14.417: INFO: Created: latency-svc-wx5p8 Apr 17 13:47:14.423: INFO: Created: latency-svc-ns48w Apr 17 13:47:14.450: INFO: Created: latency-svc-fxk4j Apr 17 13:47:14.451: INFO: Got endpoints: latency-svc-kl62x [294.817446ms] Apr 17 13:47:14.457: INFO: Created: latency-svc-v56gk Apr 17 13:47:14.464: INFO: Created: latency-svc-7vmmz Apr 17 13:47:14.490: INFO: Got endpoints: latency-svc-5sfxz [327.055882ms] Apr 17 13:47:14.537: INFO: Got endpoints: latency-svc-qpd8b [361.820284ms] Apr 17 13:47:14.562: INFO: Created: latency-svc-b6m4v Apr 17 13:47:14.566: INFO: Created: latency-svc-vdpnl Apr 17 13:47:14.566: INFO: Created: latency-svc-kz5nt Apr 17 13:47:14.574: INFO: Created: latency-svc-mmpbj Apr 17 13:47:14.574: INFO: Created: latency-svc-zcnhx Apr 17 13:47:14.574: INFO: Created: latency-svc-9cl64 Apr 17 13:47:14.574: INFO: Created: latency-svc-bxcw7 Apr 17 13:47:14.574: INFO: Created: latency-svc-js94m Apr 17 13:47:14.577: INFO: Created: latency-svc-r66x8 Apr 17 13:47:14.584: INFO: Got endpoints: latency-svc-ngqhb [400.734405ms] Apr 17 13:47:14.595: INFO: Created: latency-svc-xd2pj Apr 17 13:47:14.636: INFO: Got endpoints: latency-svc-wx5p8 [438.903213ms] Apr 17 13:47:14.647: INFO: Created: latency-svc-k58w4 Apr 17 13:47:14.688: INFO: Got endpoints: latency-svc-ns48w [467.521588ms] Apr 17 13:47:14.701: INFO: Created: latency-svc-f2x7x Apr 17 13:47:14.736: INFO: Got endpoints: latency-svc-fxk4j [505.471325ms] Apr 17 13:47:14.746: INFO: Created: latency-svc-hk8c8 Apr 17 13:47:14.785: INFO: Got endpoints: latency-svc-v56gk [544.35098ms] Apr 17 13:47:14.795: INFO: Created: latency-svc-x228b Apr 17 13:47:14.838: INFO: Got endpoints: latency-svc-7vmmz [577.447986ms] Apr 17 13:47:14.850: INFO: Created: latency-svc-k6xxz Apr 17 13:47:14.885: INFO: Got endpoints: latency-svc-b6m4v [434.579557ms] Apr 17 13:47:14.896: INFO: Created: latency-svc-5d294 Apr 17 13:47:14.938: INFO: Got endpoints: latency-svc-vdpnl [636.579041ms] Apr 17 13:47:14.948: INFO: Created: latency-svc-2hlvt Apr 17 13:47:14.985: INFO: Got endpoints: latency-svc-kz5nt [705.431366ms] Apr 17 13:47:14.996: INFO: Created: latency-svc-przbb Apr 17 13:47:15.035: INFO: Got endpoints: latency-svc-bxcw7 [497.661114ms] Apr 17 13:47:15.046: INFO: Created: latency-svc-h2sqn Apr 17 13:47:15.084: INFO: Got endpoints: latency-svc-r66x8 [765.11768ms] Apr 17 13:47:15.100: INFO: Created: latency-svc-rdrv9 Apr 17 13:47:15.134: INFO: Got endpoints: latency-svc-js94m [644.575432ms] Apr 17 13:47:15.160: INFO: Created: latency-svc-x5ggh Apr 17 13:47:15.188: INFO: Got endpoints: latency-svc-9cl64 [828.022664ms] Apr 17 13:47:15.203: INFO: Created: latency-svc-t6n4t Apr 17 13:47:15.235: INFO: Got endpoints: latency-svc-mmpbj [848.427998ms] Apr 17 13:47:15.244: INFO: Created: latency-svc-478xm Apr 17 13:47:15.284: INFO: Got endpoints: latency-svc-zcnhx [910.597711ms] Apr 17 13:47:15.295: INFO: Created: latency-svc-jx946 Apr 17 13:47:15.335: INFO: Got endpoints: latency-svc-xd2pj [750.637235ms] Apr 17 13:47:15.346: INFO: Created: latency-svc-tw7z9 Apr 17 13:47:15.387: INFO: Got endpoints: latency-svc-k58w4 [750.613823ms] Apr 17 13:47:15.397: INFO: Created: latency-svc-kzjc6 Apr 17 13:47:15.439: INFO: Got endpoints: latency-svc-f2x7x [751.028601ms] Apr 17 13:47:15.456: INFO: Created: latency-svc-kghx8 Apr 17 13:47:15.487: INFO: Got endpoints: latency-svc-hk8c8 [751.482694ms] Apr 17 13:47:15.500: INFO: Created: latency-svc-dctr2 Apr 17 13:47:15.535: INFO: Got endpoints: latency-svc-x228b [750.195318ms] Apr 17 13:47:15.546: INFO: 
Created: latency-svc-6hmww Apr 17 13:47:15.584: INFO: Got endpoints: latency-svc-k6xxz [746.387957ms] Apr 17 13:47:15.594: INFO: Created: latency-svc-982tc Apr 17 13:47:15.635: INFO: Got endpoints: latency-svc-5d294 [749.587055ms] Apr 17 13:47:15.644: INFO: Created: latency-svc-ghsth Apr 17 13:47:15.685: INFO: Got endpoints: latency-svc-2hlvt [746.643714ms] Apr 17 13:47:15.706: INFO: Created: latency-svc-2b276 Apr 17 13:47:15.734: INFO: Got endpoints: latency-svc-przbb [748.88115ms] Apr 17 13:47:15.744: INFO: Created: latency-svc-6dcsp Apr 17 13:47:15.787: INFO: Got endpoints: latency-svc-h2sqn [751.420793ms] Apr 17 13:47:15.799: INFO: Created: latency-svc-ftmgl Apr 17 13:47:15.836: INFO: Got endpoints: latency-svc-rdrv9 [751.82275ms] Apr 17 13:47:15.849: INFO: Created: latency-svc-hwj8n Apr 17 13:47:15.885: INFO: Got endpoints: latency-svc-x5ggh [750.240759ms] Apr 17 13:47:15.901: INFO: Created: latency-svc-x22xf Apr 17 13:47:15.936: INFO: Got endpoints: latency-svc-t6n4t [747.939745ms] Apr 17 13:47:15.952: INFO: Created: latency-svc-4jwr6 Apr 17 13:47:15.984: INFO: Got endpoints: latency-svc-478xm [748.962903ms] Apr 17 13:47:16.002: INFO: Created: latency-svc-x8446 Apr 17 13:47:16.035: INFO: Got endpoints: latency-svc-jx946 [750.310227ms] Apr 17 13:47:16.058: INFO: Created: latency-svc-dpbkd Apr 17 13:47:16.086: INFO: Got endpoints: latency-svc-tw7z9 [751.221351ms] Apr 17 13:47:16.101: INFO: Created: latency-svc-jgfx8 Apr 17 13:47:16.138: INFO: Got endpoints: latency-svc-kzjc6 [750.474144ms] Apr 17 13:47:16.152: INFO: Created: latency-svc-q92gz Apr 17 13:47:16.184: INFO: Got endpoints: latency-svc-kghx8 [744.864016ms] Apr 17 13:47:16.199: INFO: Created: latency-svc-mv6s7 Apr 17 13:47:16.238: INFO: Got endpoints: latency-svc-dctr2 [751.166516ms] Apr 17 13:47:16.253: INFO: Created: latency-svc-8fkch Apr 17 13:47:16.286: INFO: Got endpoints: latency-svc-6hmww [750.517096ms] Apr 17 13:47:16.300: INFO: Created: latency-svc-qssdj Apr 17 13:47:16.334: INFO: Got endpoints: latency-svc-982tc [749.706305ms] Apr 17 13:47:16.349: INFO: Created: latency-svc-vqrv2 Apr 17 13:47:16.386: INFO: Got endpoints: latency-svc-ghsth [751.331232ms] Apr 17 13:47:16.402: INFO: Created: latency-svc-jrc68 Apr 17 13:47:16.438: INFO: Got endpoints: latency-svc-2b276 [753.080178ms] Apr 17 13:47:16.471: INFO: Created: latency-svc-pwb6c Apr 17 13:47:16.487: INFO: Got endpoints: latency-svc-6dcsp [753.13816ms] Apr 17 13:47:16.514: INFO: Created: latency-svc-qzfkk Apr 17 13:47:16.537: INFO: Got endpoints: latency-svc-ftmgl [749.758741ms] Apr 17 13:47:16.555: INFO: Created: latency-svc-wswdx Apr 17 13:47:16.589: INFO: Got endpoints: latency-svc-hwj8n [752.266094ms] Apr 17 13:47:16.600: INFO: Created: latency-svc-gwcrb Apr 17 13:47:16.636: INFO: Got endpoints: latency-svc-x22xf [751.4624ms] Apr 17 13:47:16.650: INFO: Created: latency-svc-86lm2 Apr 17 13:47:16.686: INFO: Got endpoints: latency-svc-4jwr6 [749.457746ms] Apr 17 13:47:16.712: INFO: Created: latency-svc-rwnxb Apr 17 13:47:16.734: INFO: Got endpoints: latency-svc-x8446 [749.911173ms] Apr 17 13:47:16.760: INFO: Created: latency-svc-7dmgf Apr 17 13:47:16.785: INFO: Got endpoints: latency-svc-dpbkd [750.266444ms] Apr 17 13:47:16.795: INFO: Created: latency-svc-5f9d7 Apr 17 13:47:16.834: INFO: Got endpoints: latency-svc-jgfx8 [747.988783ms] Apr 17 13:47:16.848: INFO: Created: latency-svc-vbqq5 Apr 17 13:47:16.884: INFO: Got endpoints: latency-svc-q92gz [746.511369ms] Apr 17 13:47:16.898: INFO: Created: latency-svc-p9h7j Apr 17 13:47:16.935: INFO: Got endpoints: 
latency-svc-mv6s7 [750.984452ms] Apr 17 13:47:16.948: INFO: Created: latency-svc-f2pzh Apr 17 13:47:16.984: INFO: Got endpoints: latency-svc-8fkch [745.835652ms] Apr 17 13:47:16.999: INFO: Created: latency-svc-6zgzt Apr 17 13:47:17.034: INFO: Got endpoints: latency-svc-qssdj [748.198905ms] Apr 17 13:47:17.044: INFO: Created: latency-svc-4bpwl Apr 17 13:47:17.085: INFO: Got endpoints: latency-svc-vqrv2 [750.511575ms] Apr 17 13:47:17.096: INFO: Created: latency-svc-869lg Apr 17 13:47:17.136: INFO: Got endpoints: latency-svc-jrc68 [749.557751ms] Apr 17 13:47:17.146: INFO: Created: latency-svc-9r9zv Apr 17 13:47:17.187: INFO: Got endpoints: latency-svc-pwb6c [749.653147ms] Apr 17 13:47:17.196: INFO: Created: latency-svc-ljmj6 Apr 17 13:47:17.234: INFO: Got endpoints: latency-svc-qzfkk [746.857658ms] Apr 17 13:47:17.243: INFO: Created: latency-svc-fmtbs Apr 17 13:47:17.284: INFO: Got endpoints: latency-svc-wswdx [747.908365ms] Apr 17 13:47:17.297: INFO: Created: latency-svc-q26jv Apr 17 13:47:17.340: INFO: Got endpoints: latency-svc-gwcrb [751.339289ms] Apr 17 13:47:17.350: INFO: Created: latency-svc-6twd8 Apr 17 13:47:17.385: INFO: Got endpoints: latency-svc-86lm2 [748.454314ms] Apr 17 13:47:17.396: INFO: Created: latency-svc-pss5w Apr 17 13:47:17.438: INFO: Got endpoints: latency-svc-rwnxb [752.157172ms] Apr 17 13:47:17.455: INFO: Created: latency-svc-q9f2j Apr 17 13:47:17.487: INFO: Got endpoints: latency-svc-7dmgf [753.033768ms] Apr 17 13:47:17.505: INFO: Created: latency-svc-vz7s8 Apr 17 13:47:17.537: INFO: Got endpoints: latency-svc-5f9d7 [751.442188ms] Apr 17 13:47:17.554: INFO: Created: latency-svc-4fcl6 Apr 17 13:47:17.584: INFO: Got endpoints: latency-svc-vbqq5 [749.881968ms] Apr 17 13:47:17.598: INFO: Created: latency-svc-kvjjx Apr 17 13:47:17.634: INFO: Got endpoints: latency-svc-p9h7j [749.538836ms] Apr 17 13:47:17.644: INFO: Created: latency-svc-tj2n4 Apr 17 13:47:17.687: INFO: Got endpoints: latency-svc-f2pzh [751.815023ms] Apr 17 13:47:17.703: INFO: Created: latency-svc-qn98t Apr 17 13:47:17.735: INFO: Got endpoints: latency-svc-6zgzt [750.579341ms] Apr 17 13:47:17.747: INFO: Created: latency-svc-vkflk Apr 17 13:47:17.787: INFO: Got endpoints: latency-svc-4bpwl [752.87003ms] Apr 17 13:47:17.808: INFO: Created: latency-svc-nncj2 Apr 17 13:47:17.837: INFO: Got endpoints: latency-svc-869lg [751.907844ms] Apr 17 13:47:17.847: INFO: Created: latency-svc-qz7d2 Apr 17 13:47:17.886: INFO: Got endpoints: latency-svc-9r9zv [749.653693ms] Apr 17 13:47:17.896: INFO: Created: latency-svc-s7m75 Apr 17 13:47:17.936: INFO: Got endpoints: latency-svc-ljmj6 [748.421548ms] Apr 17 13:47:17.945: INFO: Created: latency-svc-ljqsd Apr 17 13:47:17.987: INFO: Got endpoints: latency-svc-fmtbs [752.92575ms] Apr 17 13:47:18.000: INFO: Created: latency-svc-8bzwt Apr 17 13:47:18.037: INFO: Got endpoints: latency-svc-q26jv [752.785951ms] Apr 17 13:47:18.051: INFO: Created: latency-svc-97crt Apr 17 13:47:18.086: INFO: Got endpoints: latency-svc-6twd8 [746.034131ms] Apr 17 13:47:18.097: INFO: Created: latency-svc-lqdrx Apr 17 13:47:18.133: INFO: Got endpoints: latency-svc-pss5w [748.617107ms] Apr 17 13:47:18.145: INFO: Created: latency-svc-rgb5x Apr 17 13:47:18.192: INFO: Got endpoints: latency-svc-q9f2j [754.287185ms] Apr 17 13:47:18.203: INFO: Created: latency-svc-z98kv Apr 17 13:47:18.235: INFO: Got endpoints: latency-svc-vz7s8 [747.511089ms] Apr 17 13:47:18.246: INFO: Created: latency-svc-v4nsp Apr 17 13:47:18.297: INFO: Got endpoints: latency-svc-4fcl6 [759.917355ms] Apr 17 13:47:18.308: INFO: Created: 
latency-svc-plc4t Apr 17 13:47:18.334: INFO: Got endpoints: latency-svc-kvjjx [749.934187ms] Apr 17 13:47:18.345: INFO: Created: latency-svc-pnh24 Apr 17 13:47:18.384: INFO: Got endpoints: latency-svc-tj2n4 [750.422711ms] Apr 17 13:47:18.397: INFO: Created: latency-svc-nqv7n Apr 17 13:47:18.439: INFO: Got endpoints: latency-svc-qn98t [752.02157ms] Apr 17 13:47:18.472: INFO: Created: latency-svc-phfw6 Apr 17 13:47:18.489: INFO: Got endpoints: latency-svc-vkflk [753.922456ms] Apr 17 13:47:18.510: INFO: Created: latency-svc-p5x6d Apr 17 13:47:18.554: INFO: Got endpoints: latency-svc-nncj2 [766.525939ms] Apr 17 13:47:18.586: INFO: Created: latency-svc-g6hlt Apr 17 13:47:18.638: INFO: Got endpoints: latency-svc-qz7d2 [801.048974ms] Apr 17 13:47:18.651: INFO: Created: latency-svc-tqm7v Apr 17 13:47:18.693: INFO: Got endpoints: latency-svc-s7m75 [807.765162ms] Apr 17 13:47:18.714: INFO: Created: latency-svc-6k8fn Apr 17 13:47:18.736: INFO: Got endpoints: latency-svc-ljqsd [800.120815ms] Apr 17 13:47:18.753: INFO: Created: latency-svc-p47s5 Apr 17 13:47:18.788: INFO: Got endpoints: latency-svc-8bzwt [800.343137ms] Apr 17 13:47:18.796: INFO: Created: latency-svc-89r6z Apr 17 13:47:18.837: INFO: Got endpoints: latency-svc-97crt [799.187485ms] Apr 17 13:47:18.858: INFO: Created: latency-svc-tmhvw Apr 17 13:47:18.888: INFO: Got endpoints: latency-svc-lqdrx [801.285478ms] Apr 17 13:47:18.898: INFO: Created: latency-svc-sdmq2 Apr 17 13:47:18.937: INFO: Got endpoints: latency-svc-rgb5x [803.376502ms] Apr 17 13:47:18.948: INFO: Created: latency-svc-dfzv7 Apr 17 13:47:18.984: INFO: Got endpoints: latency-svc-z98kv [792.086483ms] Apr 17 13:47:18.996: INFO: Created: latency-svc-qxl4v Apr 17 13:47:19.036: INFO: Got endpoints: latency-svc-v4nsp [800.887254ms] Apr 17 13:47:19.049: INFO: Created: latency-svc-655gw Apr 17 13:47:19.085: INFO: Got endpoints: latency-svc-plc4t [788.492578ms] Apr 17 13:47:19.096: INFO: Created: latency-svc-qzbfv Apr 17 13:47:19.134: INFO: Got endpoints: latency-svc-pnh24 [799.643855ms] Apr 17 13:47:19.155: INFO: Created: latency-svc-fs5w9 Apr 17 13:47:19.186: INFO: Got endpoints: latency-svc-nqv7n [800.482315ms] Apr 17 13:47:19.200: INFO: Created: latency-svc-qzbxn Apr 17 13:47:19.234: INFO: Got endpoints: latency-svc-phfw6 [794.876874ms] Apr 17 13:47:19.245: INFO: Created: latency-svc-8w52j Apr 17 13:47:19.285: INFO: Got endpoints: latency-svc-p5x6d [795.723929ms] Apr 17 13:47:19.295: INFO: Created: latency-svc-fqh2m Apr 17 13:47:19.334: INFO: Got endpoints: latency-svc-g6hlt [780.523686ms] Apr 17 13:47:19.344: INFO: Created: latency-svc-8xmbv Apr 17 13:47:19.385: INFO: Got endpoints: latency-svc-tqm7v [747.774978ms] Apr 17 13:47:19.397: INFO: Created: latency-svc-p66rc Apr 17 13:47:19.436: INFO: Got endpoints: latency-svc-6k8fn [742.110442ms] Apr 17 13:47:19.455: INFO: Created: latency-svc-lpwdb Apr 17 13:47:19.486: INFO: Got endpoints: latency-svc-p47s5 [749.329132ms] Apr 17 13:47:19.506: INFO: Created: latency-svc-gkzth Apr 17 13:47:19.536: INFO: Got endpoints: latency-svc-89r6z [748.537197ms] Apr 17 13:47:19.563: INFO: Created: latency-svc-p4xht Apr 17 13:47:19.587: INFO: Got endpoints: latency-svc-tmhvw [749.982637ms] Apr 17 13:47:19.600: INFO: Created: latency-svc-zvn8z Apr 17 13:47:19.634: INFO: Got endpoints: latency-svc-sdmq2 [746.627575ms] Apr 17 13:47:19.646: INFO: Created: latency-svc-bfcjf Apr 17 13:47:19.686: INFO: Got endpoints: latency-svc-dfzv7 [748.89086ms] Apr 17 13:47:19.696: INFO: Created: latency-svc-6xfq6 Apr 17 13:47:19.735: INFO: Got endpoints: 
latency-svc-qxl4v [750.568391ms] Apr 17 13:47:19.744: INFO: Created: latency-svc-ghvlq Apr 17 13:47:19.784: INFO: Got endpoints: latency-svc-655gw [748.115523ms] Apr 17 13:47:19.797: INFO: Created: latency-svc-sth7t Apr 17 13:47:19.834: INFO: Got endpoints: latency-svc-qzbfv [748.272505ms] Apr 17 13:47:19.843: INFO: Created: latency-svc-zds56 Apr 17 13:47:19.885: INFO: Got endpoints: latency-svc-fs5w9 [750.680258ms] Apr 17 13:47:19.896: INFO: Created: latency-svc-95wps Apr 17 13:47:19.934: INFO: Got endpoints: latency-svc-qzbxn [748.187836ms] Apr 17 13:47:19.946: INFO: Created: latency-svc-j6f26 Apr 17 13:47:19.986: INFO: Got endpoints: latency-svc-8w52j [751.81873ms] Apr 17 13:47:20.001: INFO: Created: latency-svc-5tl9m Apr 17 13:47:20.036: INFO: Got endpoints: latency-svc-fqh2m [751.621054ms] Apr 17 13:47:20.046: INFO: Created: latency-svc-hr6nw Apr 17 13:47:20.085: INFO: Got endpoints: latency-svc-8xmbv [750.629168ms] Apr 17 13:47:20.095: INFO: Created: latency-svc-gv28s Apr 17 13:47:20.137: INFO: Got endpoints: latency-svc-p66rc [751.302007ms] Apr 17 13:47:20.147: INFO: Created: latency-svc-xzw2n Apr 17 13:47:20.185: INFO: Got endpoints: latency-svc-lpwdb [749.406714ms] Apr 17 13:47:20.198: INFO: Created: latency-svc-v9wgr Apr 17 13:47:20.235: INFO: Got endpoints: latency-svc-gkzth [749.213282ms] Apr 17 13:47:20.244: INFO: Created: latency-svc-rz8dr Apr 17 13:47:20.286: INFO: Got endpoints: latency-svc-p4xht [749.35842ms] Apr 17 13:47:20.296: INFO: Created: latency-svc-57526 Apr 17 13:47:20.334: INFO: Got endpoints: latency-svc-zvn8z [747.128434ms] Apr 17 13:47:20.344: INFO: Created: latency-svc-l4q8m Apr 17 13:47:20.388: INFO: Got endpoints: latency-svc-bfcjf [753.318891ms] Apr 17 13:47:20.397: INFO: Created: latency-svc-wcjcg Apr 17 13:47:20.437: INFO: Got endpoints: latency-svc-6xfq6 [751.18854ms] Apr 17 13:47:20.455: INFO: Created: latency-svc-624k8 Apr 17 13:47:20.493: INFO: Got endpoints: latency-svc-ghvlq [757.974619ms] Apr 17 13:47:20.511: INFO: Created: latency-svc-gbkxh Apr 17 13:47:20.551: INFO: Got endpoints: latency-svc-sth7t [766.750175ms] Apr 17 13:47:20.571: INFO: Created: latency-svc-sfmgd Apr 17 13:47:20.588: INFO: Got endpoints: latency-svc-zds56 [754.032979ms] Apr 17 13:47:20.616: INFO: Created: latency-svc-nr7nf Apr 17 13:47:20.634: INFO: Got endpoints: latency-svc-95wps [749.288435ms] Apr 17 13:47:20.646: INFO: Created: latency-svc-ldql6 Apr 17 13:47:20.689: INFO: Got endpoints: latency-svc-j6f26 [754.173839ms] Apr 17 13:47:20.702: INFO: Created: latency-svc-89fxt Apr 17 13:47:20.736: INFO: Got endpoints: latency-svc-5tl9m [750.035405ms] Apr 17 13:47:20.751: INFO: Created: latency-svc-nw8qj Apr 17 13:47:20.785: INFO: Got endpoints: latency-svc-hr6nw [748.167642ms] Apr 17 13:47:20.795: INFO: Created: latency-svc-j9zpr Apr 17 13:47:20.835: INFO: Got endpoints: latency-svc-gv28s [749.366298ms] Apr 17 13:47:20.847: INFO: Created: latency-svc-nwf97 Apr 17 13:47:20.887: INFO: Got endpoints: latency-svc-xzw2n [750.226218ms] Apr 17 13:47:20.898: INFO: Created: latency-svc-sq4r2 Apr 17 13:47:20.934: INFO: Got endpoints: latency-svc-v9wgr [748.624285ms] Apr 17 13:47:20.946: INFO: Created: latency-svc-2t8k8 Apr 17 13:47:20.986: INFO: Got endpoints: latency-svc-rz8dr [750.743019ms] Apr 17 13:47:20.998: INFO: Created: latency-svc-bwl4x Apr 17 13:47:21.034: INFO: Got endpoints: latency-svc-57526 [748.37815ms] Apr 17 13:47:21.047: INFO: Created: latency-svc-6fkmr Apr 17 13:47:21.085: INFO: Got endpoints: latency-svc-l4q8m [750.645648ms] Apr 17 13:47:21.103: INFO: Created: 
latency-svc-pc4tb Apr 17 13:47:21.135: INFO: Got endpoints: latency-svc-wcjcg [747.009814ms] Apr 17 13:47:21.155: INFO: Created: latency-svc-hp2fx Apr 17 13:47:21.184: INFO: Got endpoints: latency-svc-624k8 [747.226803ms] Apr 17 13:47:21.207: INFO: Created: latency-svc-jz5c8 Apr 17 13:47:21.234: INFO: Got endpoints: latency-svc-gbkxh [741.216668ms] Apr 17 13:47:21.245: INFO: Created: latency-svc-wsk6q Apr 17 13:47:21.285: INFO: Got endpoints: latency-svc-sfmgd [733.387548ms] Apr 17 13:47:21.295: INFO: Created: latency-svc-cxmj6 Apr 17 13:47:21.334: INFO: Got endpoints: latency-svc-nr7nf [746.043171ms] Apr 17 13:47:21.345: INFO: Created: latency-svc-hgqbs Apr 17 13:47:21.386: INFO: Got endpoints: latency-svc-ldql6 [751.422113ms] Apr 17 13:47:21.396: INFO: Created: latency-svc-tt6wv Apr 17 13:47:21.434: INFO: Got endpoints: latency-svc-89fxt [745.745994ms] Apr 17 13:47:21.448: INFO: Created: latency-svc-v9lwb Apr 17 13:47:21.489: INFO: Got endpoints: latency-svc-nw8qj [752.761935ms] Apr 17 13:47:21.506: INFO: Created: latency-svc-c2zhw Apr 17 13:47:21.536: INFO: Got endpoints: latency-svc-j9zpr [751.263594ms] Apr 17 13:47:21.558: INFO: Created: latency-svc-qsjfc Apr 17 13:47:21.587: INFO: Got endpoints: latency-svc-nwf97 [752.19931ms] Apr 17 13:47:21.601: INFO: Created: latency-svc-r7nsb Apr 17 13:47:21.634: INFO: Got endpoints: latency-svc-sq4r2 [746.762685ms] Apr 17 13:47:21.645: INFO: Created: latency-svc-c8kp5 Apr 17 13:47:21.687: INFO: Got endpoints: latency-svc-2t8k8 [752.138265ms] Apr 17 13:47:21.697: INFO: Created: latency-svc-877bk Apr 17 13:47:21.737: INFO: Got endpoints: latency-svc-bwl4x [750.8359ms] Apr 17 13:47:21.746: INFO: Created: latency-svc-rtl27 Apr 17 13:47:21.784: INFO: Got endpoints: latency-svc-6fkmr [750.080947ms] Apr 17 13:47:21.835: INFO: Got endpoints: latency-svc-pc4tb [750.678221ms] Apr 17 13:47:21.885: INFO: Got endpoints: latency-svc-hp2fx [749.731932ms] Apr 17 13:47:21.934: INFO: Got endpoints: latency-svc-jz5c8 [749.496684ms] Apr 17 13:47:21.984: INFO: Got endpoints: latency-svc-wsk6q [749.096083ms] Apr 17 13:47:22.036: INFO: Got endpoints: latency-svc-cxmj6 [751.152892ms] Apr 17 13:47:22.085: INFO: Got endpoints: latency-svc-hgqbs [749.944622ms] Apr 17 13:47:22.136: INFO: Got endpoints: latency-svc-tt6wv [750.242983ms] Apr 17 13:47:22.185: INFO: Got endpoints: latency-svc-v9lwb [750.285834ms] Apr 17 13:47:22.238: INFO: Got endpoints: latency-svc-c2zhw [749.178523ms] Apr 17 13:47:22.289: INFO: Got endpoints: latency-svc-qsjfc [752.775532ms] Apr 17 13:47:22.336: INFO: Got endpoints: latency-svc-r7nsb [748.998589ms] Apr 17 13:47:22.385: INFO: Got endpoints: latency-svc-c8kp5 [751.140115ms] Apr 17 13:47:22.439: INFO: Got endpoints: latency-svc-877bk [752.284286ms] Apr 17 13:47:22.487: INFO: Got endpoints: latency-svc-rtl27 [750.212315ms] Apr 17 13:47:22.488: INFO: Latencies: [18.833686ms 23.860235ms 36.417779ms 44.507849ms 55.978252ms 70.140546ms 81.007636ms 88.651451ms 97.302282ms 114.96319ms 121.975359ms 135.695116ms 138.438322ms 148.533251ms 149.706557ms 150.481816ms 151.113027ms 151.421331ms 155.012929ms 155.446018ms 155.589671ms 156.277356ms 156.361121ms 157.646461ms 158.116151ms 158.176495ms 159.016812ms 160.212898ms 161.725307ms 171.886252ms 172.344845ms 172.708762ms 177.491354ms 185.401708ms 193.676649ms 203.723364ms 232.258249ms 236.652132ms 243.299716ms 294.817446ms 327.055882ms 361.820284ms 400.734405ms 434.579557ms 438.903213ms 467.521588ms 497.661114ms 505.471325ms 544.35098ms 577.447986ms 636.579041ms 644.575432ms 705.431366ms 733.387548ms 
741.216668ms 742.110442ms 744.864016ms 745.745994ms 745.835652ms 746.034131ms 746.043171ms 746.387957ms 746.511369ms 746.627575ms 746.643714ms 746.762685ms 746.857658ms 747.009814ms 747.128434ms 747.226803ms 747.511089ms 747.774978ms 747.908365ms 747.939745ms 747.988783ms 748.115523ms 748.167642ms 748.187836ms 748.198905ms 748.272505ms 748.37815ms 748.421548ms 748.454314ms 748.537197ms 748.617107ms 748.624285ms 748.88115ms 748.89086ms 748.962903ms 748.998589ms 749.096083ms 749.178523ms 749.213282ms 749.288435ms 749.329132ms 749.35842ms 749.366298ms 749.406714ms 749.457746ms 749.496684ms 749.538836ms 749.557751ms 749.587055ms 749.653147ms 749.653693ms 749.706305ms 749.731932ms 749.758741ms 749.881968ms 749.911173ms 749.934187ms 749.944622ms 749.982637ms 750.035405ms 750.080947ms 750.195318ms 750.212315ms 750.226218ms 750.240759ms 750.242983ms 750.266444ms 750.285834ms 750.310227ms 750.422711ms 750.474144ms 750.511575ms 750.517096ms 750.568391ms 750.579341ms 750.613823ms 750.629168ms 750.637235ms 750.645648ms 750.678221ms 750.680258ms 750.743019ms 750.8359ms 750.984452ms 751.028601ms 751.140115ms 751.152892ms 751.166516ms 751.18854ms 751.221351ms 751.263594ms 751.302007ms 751.331232ms 751.339289ms 751.420793ms 751.422113ms 751.442188ms 751.4624ms 751.482694ms 751.621054ms 751.815023ms 751.81873ms 751.82275ms 751.907844ms 752.02157ms 752.138265ms 752.157172ms 752.19931ms 752.266094ms 752.284286ms 752.761935ms 752.775532ms 752.785951ms 752.87003ms 752.92575ms 753.033768ms 753.080178ms 753.13816ms 753.318891ms 753.922456ms 754.032979ms 754.173839ms 754.287185ms 757.974619ms 759.917355ms 765.11768ms 766.525939ms 766.750175ms 780.523686ms 788.492578ms 792.086483ms 794.876874ms 795.723929ms 799.187485ms 799.643855ms 800.120815ms 800.343137ms 800.482315ms 800.887254ms 801.048974ms 801.285478ms 803.376502ms 807.765162ms 828.022664ms 848.427998ms 910.597711ms] Apr 17 13:47:22.488: INFO: 50 %ile: 749.538836ms Apr 17 13:47:22.488: INFO: 90 %ile: 766.525939ms Apr 17 13:47:22.488: INFO: 99 %ile: 848.427998ms Apr 17 13:47:22.488: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:22.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svc-latency-8243" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":38,"skipped":644,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:22.551: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:26.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-479" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":668,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:20.683: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set.
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:31.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4208" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:26.624: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 17 13:47:26.661: INFO: The status of Pod pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:47:28.665: INFO: The status of Pod pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 17 13:47:29.196: INFO: Successfully updated pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05" Apr 17 13:47:29.196: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05" in namespace "pods-5861" to be "terminated due to deadline exceeded" Apr 17 13:47:29.217: INFO: Pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05": Phase="Running", Reason="", readiness=true. Elapsed: 20.209656ms Apr 17 13:47:31.221: INFO: Pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05": Phase="Running", Reason="", readiness=true. Elapsed: 2.024330591s Apr 17 13:47:33.224: INFO: Pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05": Phase="Failed", Reason="DeadlineExceeded", readiness=true. Elapsed: 4.027318961s Apr 17 13:47:33.224: INFO: Pod "pod-update-activedeadlineseconds-e8784265-fc30-4b89-9151-d376fc4afa05" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:33.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5861" for this suite.
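Note on the activeDeadlineSeconds behaviour the spec above waits for: once the deadline (counted from pod start) elapses, the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Failed", Reason="DeadlineExceeded" transition in the log. A minimal sketch of a pod built with the k8s.io/api types that sets this field (the name, image, and 5-second deadline are illustrative assumptions, not values from the test; the e2e case actually patches the field onto an already running pod):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// After 5 seconds of pod runtime the kubelet marks the pod Failed/DeadlineExceeded.
	deadline := int64(5)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "deadline-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			RestartPolicy:         corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.7", // illustrative image
			}},
		},
	}

	// Print the manifest as JSON so the sketch is runnable without a cluster.
	out, err := json.MarshalIndent(pod, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}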
•
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":672,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:33.247: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name cm-test-opt-del-022e059d-6163-42e6-a849-8a39e083750d STEP: Creating configMap with name cm-test-opt-upd-c33744f6-e42d-44cd-929b-1152bdf06767 STEP: Creating the pod Apr 17 13:47:33.288: INFO: The status of Pod pod-projected-configmaps-85d0206d-e172-4c0e-afcb-fea9efb40ed8 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:47:35.292: INFO: The status of Pod pod-projected-configmaps-85d0206d-e172-4c0e-afcb-fea9efb40ed8 is Running (Ready = true) STEP: Deleting configmap cm-test-opt-del-022e059d-6163-42e6-a849-8a39e083750d STEP: Updating configmap cm-test-opt-upd-c33744f6-e42d-44cd-929b-1152bdf06767 STEP: Creating configMap with name cm-test-opt-create-bfd6d395-4bbe-44c8-8b09-836a948060e3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:37.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3105" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":685,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
SSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set.
[Conformance]","total":-1,"completed":16,"skipped":266,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:31.758: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 STEP: create the container to handle the HTTPGet hook request. Apr 17 13:47:31.802: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:47:33.806: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the pod with lifecycle hook Apr 17 13:47:33.816: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:47:35.820: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) STEP: delete the pod with lifecycle hook Apr 17 13:47:35.828: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 13:47:35.832: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 13:47:37.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 13:47:37.836: INFO: Pod pod-with-prestop-http-hook still exists Apr 17 13:47:39.833: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 17 13:47:39.837: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:39.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4886" for this suite.
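Note on the prestop behaviour checked above: when the pod with the hook is deleted, the kubelet performs the configured HTTP GET before terminating the container, and the spec verifies the request arrived at the separate handler pod. A minimal, hypothetical sketch of a container with an HTTP preStop hook built from the k8s.io/api types (image, path, port, and host IP are illustrative; this assumes a k8s.io/api release from v1.23 or newer, where the handler type is named corev1.LifecycleHandler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "prestop-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "main",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					// The kubelet issues this GET before sending SIGTERM on pod deletion.
					PreStop: &corev1.LifecycleHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Port: intstr.FromInt(8080),
							Host: "10.0.0.10", // placeholder for the handler pod's IP
						},
					},
				},
			}},
		},
	}

	// Print the manifest so the sketch runs without a cluster.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}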
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":266,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:39.867: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-d015037f-9f63-426a-818f-3af6e184babc STEP: Creating a pod to test consume configMaps Apr 17 13:47:39.901: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600" in namespace "projected-5995" to be "Succeeded or Failed" Apr 17 13:47:39.904: INFO: Pod "pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600": Phase="Pending", Reason="", readiness=false. Elapsed: 2.368163ms Apr 17 13:47:41.907: INFO: Pod "pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006073653s STEP: Saw pod success Apr 17 13:47:41.908: INFO: Pod "pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600" satisfied condition "Succeeded or Failed" Apr 17 13:47:41.910: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600 container agnhost-container: <nil> STEP: delete the pod Apr 17 13:47:41.924: INFO: Waiting for pod pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600 to disappear Apr 17 13:47:41.927: INFO: Pod pod-projected-configmaps-d1b49ec3-0185-445e-8ae3-33b6f28cd600 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:41.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5995" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":277,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:41.956: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:47:41.992: INFO: Waiting up to 5m0s for pod "busybox-user-65534-3c23fb9b-1bcc-4bad-86b8-7399b69e207c" in namespace "security-context-test-2809" to be "Succeeded or Failed" Apr 17 13:47:41.995: INFO: Pod "busybox-user-65534-3c23fb9b-1bcc-4bad-86b8-7399b69e207c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670666ms Apr 17 13:47:43.999: INFO: Pod "busybox-user-65534-3c23fb9b-1bcc-4bad-86b8-7399b69e207c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007022695s Apr 17 13:47:43.999: INFO: Pod "busybox-user-65534-3c23fb9b-1bcc-4bad-86b8-7399b69e207c" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:43.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2809" for this suite.
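Note on the runAsUser case above: the spec runs a busybox pod whose container is forced to UID 65534 and checks that it completes successfully under that identity. A minimal, hypothetical sketch of such a pod using a container-level securityContext (pod name, namespace, and command are illustrative, not taken from the test):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// 65534 is the conventional "nobody" UID the conformance case uses.
	uid := int64(65534)

	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox:1.35", // illustrative image
				Command: []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{
					// The container's main process runs as this UID.
					RunAsUser: &uid,
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}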
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":294,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:47:37.369: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:53.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4748" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes.
[Conformance]","total":-1,"completed":42,"skipped":695,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:53.513: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap configmap-2506/configmap-test-a32e81a2-c4ef-4843-8f6c-3d0c5a933d40 �[1mSTEP�[0m: Creating a pod to test consume configMaps Apr 17 13:47:53.554: INFO: Waiting up to 5m0s for pod "pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f" in namespace "configmap-2506" to be "Succeeded or Failed" Apr 17 13:47:53.557: INFO: Pod "pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.652995ms Apr 17 13:47:55.561: INFO: Pod "pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006632894s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:55.561: INFO: Pod "pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f" satisfied condition "Succeeded or Failed" Apr 17 13:47:55.564: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f container env-test: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:47:55.580: INFO: Waiting for pod pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f to disappear Apr 17 13:47:55.582: INFO: Pod pod-configmaps-1865a60f-05bc-47cb-9a1d-34e0039cf72f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:55.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-2506" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":720,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:55.624: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Apr 17 13:47:55.662: INFO: Waiting up to 5m0s for pod "pod-45dd4940-24e5-4478-a96d-2a9cefab3ada" in namespace "emptydir-5580" to be "Succeeded or Failed" Apr 17 13:47:55.664: INFO: Pod "pod-45dd4940-24e5-4478-a96d-2a9cefab3ada": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255951ms Apr 17 13:47:57.668: INFO: Pod "pod-45dd4940-24e5-4478-a96d-2a9cefab3ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.005705739s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:57.668: INFO: Pod "pod-45dd4940-24e5-4478-a96d-2a9cefab3ada" satisfied condition "Succeeded or Failed" Apr 17 13:47:57.670: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-45dd4940-24e5-4478-a96d-2a9cefab3ada container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:47:57.683: INFO: Waiting for pod pod-45dd4940-24e5-4478-a96d-2a9cefab3ada to disappear Apr 17 13:47:57.685: INFO: Pod pod-45dd4940-24e5-4478-a96d-2a9cefab3ada no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:57.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-5580" for this suite. 
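The EmptyDir tests above mount a tmpfs-backed emptyDir (medium "Memory") and then check the mount and file modes from inside the container. A minimal sketch of the volume wiring only, using client-go types with illustrative names and a plain shell check instead of the test's mount-test image:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// tmpfsEmptyDirPod returns a pod with a memory-backed emptyDir mounted at
// /test-volume; the container just lists the mount so the mode is visible
// in its logs.
func tmpfsEmptyDirPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
                Command:      []string{"sh", "-c", "ls -ld /test-volume && mount | grep test-volume"},
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        },
    }
}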
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":747,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:57.756: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating projection with secret that has name secret-emptykey-test-95dfb0c1-694f-48d3-b834-2aac80b969d9 [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:47:57.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-8033" for this suite. 
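The Secrets test above exercises only API validation: creating a Secret whose data map contains an empty key must be rejected by the API server, so no pod is ever started. A minimal sketch of that negative check, assuming client-go and illustrative names:

package main

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// expectEmptyKeyRejected attempts to create an invalid Secret; the API server
// should return a validation error because "" is not a valid data key.
func expectEmptyKeyRejected(ctx context.Context, cs kubernetes.Interface, ns string) error {
    bad := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{Name: "secret-emptykey-test"},
        Data:       map[string][]byte{"": []byte("value-1")},
    }
    if _, err := cs.CoreV1().Secrets(ns).Create(ctx, bad, metav1.CreateOptions{}); err == nil {
        return fmt.Errorf("expected validation error for empty secret key, got none")
    }
    return nil
}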
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":45,"skipped":795,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:44.023: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:47:44.059: INFO: created pod Apr 17 13:47:44.059: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-1197" to be "Succeeded or Failed" Apr 17 13:47:44.062: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.043868ms Apr 17 13:47:46.066: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006586204s �[1mSTEP�[0m: Saw pod success Apr 17 13:47:46.066: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Apr 17 13:48:16.068: INFO: polling logs Apr 17 13:48:16.074: INFO: Pod logs: 2022/04/17 13:47:44 OK: Got token 2022/04/17 13:47:44 validating with in-cluster discovery 2022/04/17 13:47:44 OK: got issuer https://kubernetes.default.svc.cluster.local 2022/04/17 13:47:44 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1197:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650203864, NotBefore:1650203264, IssuedAt:1650203264, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1197", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"66e113a3-6a2a-4428-bb54-ce8bcc98a283"}}} 2022/04/17 13:47:44 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local 2022/04/17 13:47:44 OK: Validated signature on JWT 2022/04/17 13:47:44 OK: Got valid claims from token! 
2022/04/17 13:47:44 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-1197:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650203864, NotBefore:1650203264, IssuedAt:1650203264, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-1197", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"66e113a3-6a2a-4428-bb54-ce8bcc98a283"}}} Apr 17 13:48:16.074: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:16.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-1197" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":20,"skipped":302,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:47:57.830: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a watch on configmaps with label A �[1mSTEP�[0m: creating a watch on configmaps with label B �[1mSTEP�[0m: creating a watch on configmaps with label A or B �[1mSTEP�[0m: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 17 13:47:57.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8588 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test 
Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:47:57.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8588 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A and ensuring the correct watchers observe the notification Apr 17 13:47:57.869: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8589 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:47:57.869: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8589 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A again and ensuring the correct watchers observe the notification Apr 17 13:47:57.875: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8590 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:47:57.875: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8590 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap A and ensuring the correct watchers observe the notification Apr 17 13:47:57.879: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8591 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:47:57.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-5102 a483b875-eb9f-4ecd-8e98-5c71475d618b 8591 0 2022-04-17 
13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 17 13:47:57.883: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5102 6b6375e0-4910-45a7-b041-a3832b054c5a 8592 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:47:57.883: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5102 6b6375e0-4910-45a7-b041-a3832b054c5a 8592 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap B and ensuring the correct watchers observe the notification Apr 17 13:48:07.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5102 6b6375e0-4910-45a7-b041-a3832b054c5a 8651 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:48:07.889: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-5102 6b6375e0-4910-45a7-b041-a3832b054c5a 8651 0 2022-04-17 13:47:57 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-17 13:47:57 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:17.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-5102" for this suite. 
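The Watchers test logged above opens label-selected watches on configmaps and asserts the ADDED/MODIFIED/DELETED notifications it prints ("Got : ADDED ...", etc.). A minimal sketch of one such watch, assuming client-go; the label value below is illustrative:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps prints add/update/delete notifications for
// configmaps carrying the watch-this-configmap=multiple-watchers-A label,
// similar to the events recorded in the log above.
func watchLabeledConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    }
    return nil
}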
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":46,"skipped":831,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:17.903: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Apr 17 13:48:17.941: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd" in namespace "downward-api-8325" to be "Succeeded or Failed" Apr 17 13:48:17.943: INFO: Pod "downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493585ms Apr 17 13:48:19.948: INFO: Pod "downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007368905s �[1mSTEP�[0m: Saw pod success Apr 17 13:48:19.948: INFO: Pod "downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd" satisfied condition "Succeeded or Failed" Apr 17 13:48:19.953: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd container client-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:48:19.972: INFO: Waiting for pod downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd to disappear Apr 17 13:48:19.977: INFO: Pod downwardapi-volume-d84c3dc0-b932-49d6-ac09-cb3dc8e287dd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:19.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-8325" for this suite. 
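The Downward API test above projects the container's memory limit into a volume file and reads it back from the client-container logs. A minimal sketch of the pod wiring, using client-go types with illustrative names and limits:

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIMemoryLimitPod exposes limits.memory of "client-container"
// as /etc/podinfo/memory_limit inside that container.
func downwardAPIMemoryLimitPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
                Resources: corev1.ResourceRequirements{
                    Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
        },
    }
}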
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":831,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:16.199: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:16.223: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption-2 �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: listing a collection of PDBs across all namespaces �[1mSTEP�[0m: listing a collection of PDBs in namespace disruption-1254 �[1mSTEP�[0m: deleting a collection of PDBs �[1mSTEP�[0m: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:22.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-2-8249" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:22.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-1254" for this suite. 
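The DisruptionController test above creates PodDisruptionBudgets, lists them across namespaces, and then removes them with a single DeleteCollection call. A minimal sketch of the create-then-delete-collection flow, assuming client-go policy/v1 and illustrative names/selectors:

package main

import (
    "context"

    policyv1 "k8s.io/api/policy/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
)

// createThenDeletePDBCollection creates one PDB and then removes every PDB
// in the namespace via DeleteCollection, as the conformance test does.
func createThenDeletePDBCollection(ctx context.Context, cs kubernetes.Interface, ns string) error {
    minAvailable := intstr.FromInt(1)
    pdb := &policyv1.PodDisruptionBudget{
        ObjectMeta: metav1.ObjectMeta{Name: "foo"},
        Spec: policyv1.PodDisruptionBudgetSpec{
            MinAvailable: &minAvailable,
            Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
        },
    }
    if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{}); err != nil {
        return err
    }
    return cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{})
}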
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":21,"skipped":396,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:22.360: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:48:22.393: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:23.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-8374" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":22,"skipped":428,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:23.427: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir volume type on tmpfs Apr 17 13:48:23.458: INFO: Waiting up to 5m0s for pod "pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5" in namespace "emptydir-7210" to be "Succeeded or Failed" Apr 17 13:48:23.460: INFO: Pod "pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041352ms Apr 17 13:48:25.465: INFO: Pod "pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006447503s �[1mSTEP�[0m: Saw pod success Apr 17 13:48:25.465: INFO: Pod "pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5" satisfied condition "Succeeded or Failed" Apr 17 13:48:25.468: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5 container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:48:25.482: INFO: Waiting for pod pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5 to disappear Apr 17 13:48:25.484: INFO: Pod pod-f66e4c7e-a7e6-4e2e-a873-9730f6cfadf5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:25.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-7210" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":431,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:20.064: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1571 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Apr 17 13:48:20.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6693 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Apr 17 13:48:20.184: INFO: stderr: "" Apr 17 13:48:20.185: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Apr 17 13:48:25.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6693 get pod e2e-test-httpd-pod -o json' Apr 17 13:48:25.304: INFO: stderr: "" Apr 17 13:48:25.304: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-04-17T13:48:20Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-6693\",\n \"resourceVersion\": \"8741\",\n \"uid\": \"9d514b6e-f25f-41bc-9255-30798153b9cf\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-rqbtq\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": 
\"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-rqbtq\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-17T13:48:20Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-17T13:48:21Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-17T13:48:21Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-17T13:48:20Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d89acc37300973b4501b1d7b200a4aebcef1c7b981ea4853bbb81686f7bb502c\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-17T13:48:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.3.34\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.3.34\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-17T13:48:20Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Apr 17 13:48:25.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6693 replace -f -' Apr 17 13:48:26.236: INFO: stderr: "" Apr 17 13:48:26.236: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1575 Apr 17 13:48:26.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6693 delete pods e2e-test-httpd-pod' Apr 17 13:48:27.648: INFO: stderr: "" Apr 17 13:48:27.648: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:27.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-6693" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":48,"skipped":891,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:27.686: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Apr 17 13:48:27.718: INFO: Waiting up to 5m0s for pod "pod-d506a9d4-29cc-45a3-bc74-ebd235888bce" in namespace "emptydir-4440" to be "Succeeded or Failed" Apr 17 13:48:27.721: INFO: Pod "pod-d506a9d4-29cc-45a3-bc74-ebd235888bce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.904618ms Apr 17 13:48:29.724: INFO: Pod "pod-d506a9d4-29cc-45a3-bc74-ebd235888bce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006269873s �[1mSTEP�[0m: Saw pod success Apr 17 13:48:29.724: INFO: Pod "pod-d506a9d4-29cc-45a3-bc74-ebd235888bce" satisfied condition "Succeeded or Failed" Apr 17 13:48:29.727: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-d506a9d4-29cc-45a3-bc74-ebd235888bce container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:48:29.738: INFO: Waiting for pod pod-d506a9d4-29cc-45a3-bc74-ebd235888bce to disappear Apr 17 13:48:29.744: INFO: Pod pod-d506a9d4-29cc-45a3-bc74-ebd235888bce no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:29.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-4440" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":915,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:29.768: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: starting the proxy server Apr 17 13:48:29.796: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5161 proxy -p 0 --disable-filter' �[1mSTEP�[0m: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:29.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-5161" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":50,"skipped":928,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:29.864: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:48:29.892: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with known and required properties Apr 17 13:48:32.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 create -f -' Apr 17 13:48:32.932: INFO: stderr: "" Apr 17 13:48:32.932: INFO: stdout: "e2e-test-crd-publish-openapi-6684-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 17 13:48:32.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 delete e2e-test-crd-publish-openapi-6684-crds test-foo' Apr 17 13:48:33.003: INFO: stderr: "" Apr 17 13:48:33.003: INFO: stdout: "e2e-test-crd-publish-openapi-6684-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 17 13:48:33.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 apply -f -' Apr 17 13:48:33.196: INFO: stderr: "" Apr 17 13:48:33.196: INFO: stdout: "e2e-test-crd-publish-openapi-6684-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 17 13:48:33.196: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 delete e2e-test-crd-publish-openapi-6684-crds test-foo' Apr 17 13:48:33.266: INFO: stderr: "" Apr 17 13:48:33.266: INFO: stdout: "e2e-test-crd-publish-openapi-6684-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with value outside defined enum values Apr 17 13:48:33.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 create -f -' Apr 17 13:48:33.439: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 17 13:48:33.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 create -f -' Apr 17 13:48:33.601: INFO: rc: 1 Apr 17 13:48:33.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 
--namespace=crd-publish-openapi-5957 apply -f -' Apr 17 13:48:33.768: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request without required properties Apr 17 13:48:33.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 create -f -' Apr 17 13:48:33.934: INFO: rc: 1 Apr 17 13:48:33.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 --namespace=crd-publish-openapi-5957 apply -f -' Apr 17 13:48:34.095: INFO: rc: 1 �[1mSTEP�[0m: kubectl explain works to explain CR properties Apr 17 13:48:34.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 explain e2e-test-crd-publish-openapi-6684-crds' Apr 17 13:48:34.266: INFO: stderr: "" Apr 17 13:48:34.266: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6684-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" �[1mSTEP�[0m: kubectl explain works to explain CR properties recursively Apr 17 13:48:34.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 explain e2e-test-crd-publish-openapi-6684-crds.metadata' Apr 17 13:48:34.447: INFO: stderr: "" Apr 17 13:48:34.447: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6684-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. 
It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. 
Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 17 13:48:34.448: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 explain e2e-test-crd-publish-openapi-6684-crds.spec' Apr 17 13:48:34.641: INFO: stderr: "" Apr 17 13:48:34.641: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6684-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 17 13:48:34.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 explain e2e-test-crd-publish-openapi-6684-crds.spec.bars' Apr 17 13:48:34.817: INFO: stderr: "" Apr 17 13:48:34.817: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6684-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" �[1mSTEP�[0m: kubectl explain works to return error when explain is called on property that doesn't exist Apr 17 13:48:34.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5957 explain e2e-test-crd-publish-openapi-6684-crds.spec.bars2' Apr 17 13:48:35.016: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:37.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-5957" for this suite. 
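The CustomResourcePublishOpenAPI test above registers a CRD with a structural OpenAPI v3 schema; that published schema is what lets kubectl do client-side validation (rejecting unknown or missing properties) and answer `kubectl explain` as shown in the log. A minimal sketch of such a CRD, assuming the apiextensions clientset; the group and kind names are illustrative, not the test's generated ones:

package main

import (
    "context"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createFooCRD registers foos.example.com with a schema that requires
// spec.name, so the API server publishes OpenAPI for it and clients can
// validate and explain Foo objects.
func createFooCRD(ctx context.Context, cs apiextensionsclient.Interface) error {
    crd := &apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                        Type: "object",
                        Properties: map[string]apiextensionsv1.JSONSchemaProps{
                            "spec": {
                                Type:     "object",
                                Required: []string{"name"},
                                Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                    "name": {Type: "string"},
                                    "age":  {Type: "string"},
                                },
                            },
                        },
                    },
                },
            }},
        },
    }
    _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
    return err
}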
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":51,"skipped":928,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:37.182: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename init-container �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating the pod Apr 17 13:48:37.207: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:39.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "init-container-152" for this suite. 
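The init-container behaviour exercised above can be approximated with a minimal pod; the image and commands below are illustrative, not the ones the suite generates. With restartPolicy: Never, a failing init container is expected to leave the pod in phase Failed and the app container never starts:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init-fails
        image: busybox
        command: ["sh", "-c", "exit 1"]      # fails immediately
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo never runs"]
    EOF
    # Expect "Failed"; the app container should show no start attempts.
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'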
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":52,"skipped":937,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:39.457: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81 [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Create set of events �[1mSTEP�[0m: get a list of Events with a label in the current namespace �[1mSTEP�[0m: delete a list of events Apr 17 13:48:39.503: INFO: requesting DeleteCollection of events �[1mSTEP�[0m: check that the list of events matches the requested quantity [AfterEach] [sig-instrumentation] Events API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:39.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-3494" for this suite. 
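The Events API steps above (create a set of events, list them by label, delete them as a collection, confirm the list is empty) can be sketched with kubectl. The label selector here is illustrative; note also that the e2e test calls the API's deletecollection verb directly, while kubectl with a selector lists the events and deletes them individually, but the end state is the same:

    kubectl -n <namespace> get events -l testevent-set=true
    kubectl -n <namespace> delete events -l testevent-set=true
    # The quantity check in the log corresponds to this list coming back empty.
    kubectl -n <namespace> get events -l testevent-set=true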
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":53,"skipped":979,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:25.520: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-4588 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Apr 17 13:48:25.550: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 13:48:25.593: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:48:27.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:29.597: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:31.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:33.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:35.598: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:37.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:39.597: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:41.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:43.596: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:48:45.597: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 13:48:45.603: INFO: The status of Pod netserver-1 is Running (Ready = true) Apr 17 13:48:45.613: INFO: The status of Pod netserver-2 is Running (Ready = true) Apr 17 13:48:45.621: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Apr 17 13:48:47.636: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Apr 17 13:48:47.636: INFO: Breadth first check of 192.168.2.32 on host 172.18.0.6... 
Apr 17 13:48:47.639: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.33:9080/dial?request=hostname&protocol=udp&host=192.168.2.32&port=8081&tries=1'] Namespace:pod-network-test-4588 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:48:47.639: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:48:47.639: INFO: ExecWithOptions: Clientset creation Apr 17 13:48:47.639: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4588/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.33%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.2.32%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:48:47.747: INFO: Waiting for responses: map[] Apr 17 13:48:47.747: INFO: reached 192.168.2.32 after 0/1 tries Apr 17 13:48:47.747: INFO: Breadth first check of 192.168.0.28 on host 172.18.0.4... Apr 17 13:48:47.750: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.33:9080/dial?request=hostname&protocol=udp&host=192.168.0.28&port=8081&tries=1'] Namespace:pod-network-test-4588 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:48:47.750: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:48:47.751: INFO: ExecWithOptions: Clientset creation Apr 17 13:48:47.751: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4588/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.33%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.0.28%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:48:47.829: INFO: Waiting for responses: map[] Apr 17 13:48:47.829: INFO: reached 192.168.0.28 after 0/1 tries Apr 17 13:48:47.829: INFO: Breadth first check of 192.168.3.36 on host 172.18.0.7... Apr 17 13:48:47.833: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.33:9080/dial?request=hostname&protocol=udp&host=192.168.3.36&port=8081&tries=1'] Namespace:pod-network-test-4588 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:48:47.833: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:48:47.834: INFO: ExecWithOptions: Clientset creation Apr 17 13:48:47.834: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4588/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.33%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.3.36%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:48:47.911: INFO: Waiting for responses: map[] Apr 17 13:48:47.911: INFO: reached 192.168.3.36 after 0/1 tries Apr 17 13:48:47.911: INFO: Breadth first check of 192.168.6.14 on host 172.18.0.5... 
Apr 17 13:48:47.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.33:9080/dial?request=hostname&protocol=udp&host=192.168.6.14&port=8081&tries=1'] Namespace:pod-network-test-4588 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:48:47.915: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:48:47.915: INFO: ExecWithOptions: Clientset creation Apr 17 13:48:47.915: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-4588/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.33%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.6.14%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:48:47.993: INFO: Waiting for responses: map[] Apr 17 13:48:47.993: INFO: reached 192.168.6.14 after 0/1 tries Apr 17 13:48:47.993: INFO: Going to retry 0 out of 4 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:47.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-4588" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":456,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:48.021: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] Replace and Patch tests [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:48:48.064: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 17 13:48:53.068: INFO: Pod name sample-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running �[1mSTEP�[0m: Scaling up "test-rs" replicaset Apr 17 13:48:53.082: INFO: Updating replica set "test-rs" �[1mSTEP�[0m: patching the ReplicaSet Apr 17 13:48:53.092: INFO: observed ReplicaSet test-rs in namespace replicaset-4677 with ReadyReplicas 1, AvailableReplicas 1 Apr 17 13:48:53.107: INFO: observed ReplicaSet test-rs in namespace replicaset-4677 with ReadyReplicas 1, AvailableReplicas 1 Apr 17 13:48:53.124: INFO: observed ReplicaSet test-rs in namespace replicaset-4677 with ReadyReplicas 1, AvailableReplicas 1 Apr 17 13:48:53.128: INFO: observed ReplicaSet test-rs in namespace replicaset-4677 with ReadyReplicas 1, 
AvailableReplicas 1 Apr 17 13:48:54.462: INFO: observed ReplicaSet test-rs in namespace replicaset-4677 with ReadyReplicas 2, AvailableReplicas 2 Apr 17 13:48:54.853: INFO: observed Replicaset test-rs in namespace replicaset-4677 with ReadyReplicas 3 found true [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:54.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-4677" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":-1,"completed":25,"skipped":472,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:54.905: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-map-9012306f-25ce-4a81-acce-872399b26e5a �[1mSTEP�[0m: Creating a pod to test consume configMaps Apr 17 13:48:54.939: INFO: Waiting up to 5m0s for pod "pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7" in namespace "configmap-6160" to be "Succeeded or Failed" Apr 17 13:48:54.950: INFO: Pod "pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7": Phase="Pending", Reason="", readiness=false. Elapsed: 10.594942ms Apr 17 13:48:56.955: INFO: Pod "pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.015258803s �[1mSTEP�[0m: Saw pod success Apr 17 13:48:56.955: INFO: Pod "pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7" satisfied condition "Succeeded or Failed" Apr 17 13:48:56.957: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:48:56.969: INFO: Waiting for pod pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7 to disappear Apr 17 13:48:56.972: INFO: Pod pod-configmaps-11c29d66-541c-4f81-9fdd-4ed614af05d7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:48:56.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-6160" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":499,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:39.528: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 17 13:48:39.557: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 17 13:48:49.435: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:48:51.605: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:01.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-2393" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":54,"skipped":982,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:01.296: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename runtimeclass �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/node.k8s.io �[1mSTEP�[0m: getting /apis/node.k8s.io/v1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: watching Apr 17 13:49:01.347: INFO: starting watch �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 17 13:49:01.364: INFO: waiting for watch events with expected annotations �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:01.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "runtimeclass-7522" for this suite. 
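The RuntimeClass operations listed above (create, get, list, watch, patch, update, delete, and deleting a collection against node.k8s.io/v1) map onto ordinary kubectl calls. A sketch with an illustrative object; the handler value must name a handler actually configured in the nodes' container runtime:

    kubectl apply -f - <<'EOF'
    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: demo-runtimeclass
    handler: runc            # illustrative handler name
    EOF
    kubectl get runtimeclasses
    kubectl patch runtimeclass demo-runtimeclass --type=merge -p '{"metadata":{"annotations":{"example":"patched"}}}'
    kubectl delete runtimeclass demo-runtimeclass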
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":55,"skipped":988,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:48:56.983: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating Agnhost RC Apr 17 13:48:57.013: INFO: namespace kubectl-9781 Apr 17 13:48:57.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9781 create -f -' Apr 17 13:48:57.630: INFO: stderr: "" Apr 17 13:48:57.631: INFO: stdout: "replicationcontroller/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Apr 17 13:48:58.635: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 13:48:58.635: INFO: Found 0 / 1 Apr 17 13:48:59.634: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 13:48:59.634: INFO: Found 1 / 1 Apr 17 13:48:59.634: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 17 13:48:59.637: INFO: Selector matched 1 pods for map[app:agnhost] Apr 17 13:48:59.637: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 17 13:48:59.637: INFO: wait on agnhost-primary startup in kubectl-9781 Apr 17 13:48:59.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9781 logs agnhost-primary-mvgnr agnhost-primary' Apr 17 13:48:59.716: INFO: stderr: "" Apr 17 13:48:59.716: INFO: stdout: "Paused\n" �[1mSTEP�[0m: exposing RC Apr 17 13:48:59.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9781 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Apr 17 13:48:59.801: INFO: stderr: "" Apr 17 13:48:59.801: INFO: stdout: "service/rm2 exposed\n" Apr 17 13:48:59.806: INFO: Service rm2 in namespace kubectl-9781 found. �[1mSTEP�[0m: exposing service Apr 17 13:49:01.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9781 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Apr 17 13:49:01.907: INFO: stderr: "" Apr 17 13:49:01.907: INFO: stdout: "service/rm3 exposed\n" Apr 17 13:49:01.916: INFO: Service rm3 in namespace kubectl-9781 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:03.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9781" for this suite. 
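Stripped of the generated names, the expose sequence above follows a simple pattern: a replication controller is exposed as a service, and that service is exposed again under a new name and port (the ports mirror the ones in the log):

    kubectl --kubeconfig=/tmp/kubeconfig -n <namespace> expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
    kubectl --kubeconfig=/tmp/kubeconfig -n <namespace> expose service rm2 --name=rm3 --port=2345 --target-port=6379
    # Both services should now exist and point at the RC's pods.
    kubectl --kubeconfig=/tmp/kubeconfig -n <namespace> get svc rm2 rm3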
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":27,"skipped":500,"failed":3,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:01.404: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:49:03.448: INFO: Deleting pod "var-expansion-3f35f1a9-33e8-4090-b2c7-cdb41dd15c6c" in namespace "var-expansion-3708" Apr 17 13:49:03.454: INFO: Wait up to 5m0s for pod "var-expansion-3f35f1a9-33e8-4090-b2c7-cdb41dd15c6c" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:05.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-3708" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":56,"skipped":997,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:05.490: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test override command Apr 17 13:49:05.524: INFO: Waiting up to 5m0s for pod "client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906" in namespace "containers-5584" to be "Succeeded or Failed" Apr 17 13:49:05.527: INFO: Pod "client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906": Phase="Pending", Reason="", readiness=false. Elapsed: 3.445845ms Apr 17 13:49:07.532: INFO: Pod "client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00799505s �[1mSTEP�[0m: Saw pod success Apr 17 13:49:07.532: INFO: Pod "client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906" satisfied condition "Succeeded or Failed" Apr 17 13:49:07.535: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:49:07.551: INFO: Waiting for pod client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906 to disappear Apr 17 13:49:07.555: INFO: Pod client-containers-1d36f65b-b4a8-4c3f-878f-d6038ea2c906 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:07.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-5584" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1006,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:07.586: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Apr 17 13:49:07.625: INFO: Waiting up to 5m0s for pod "pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54" in namespace "emptydir-3888" to be "Succeeded or Failed" Apr 17 13:49:07.628: INFO: Pod "pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.416231ms Apr 17 13:49:09.632: INFO: Pod "pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006366572s �[1mSTEP�[0m: Saw pod success Apr 17 13:49:09.632: INFO: Pod "pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54" satisfied condition "Succeeded or Failed" Apr 17 13:49:09.636: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54 container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:49:09.654: INFO: Waiting for pod pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54 to disappear Apr 17 13:49:09.657: INFO: Pod pod-ef219bc7-e0b9-42fe-860b-b963eb65fa54 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3888" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1023,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:09.673: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating secret with name secret-test-1cdc6547-98b3-48aa-84ce-f9f0903eda6c �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 17 13:49:09.715: INFO: Waiting up to 5m0s for pod "pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf" in namespace "secrets-8058" to be "Succeeded or Failed" Apr 17 13:49:09.719: INFO: Pod "pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.38861ms Apr 17 13:49:11.725: INFO: Pod "pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00950973s �[1mSTEP�[0m: Saw pod success Apr 17 13:49:11.725: INFO: Pod "pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf" satisfied condition "Succeeded or Failed" Apr 17 13:49:11.728: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:49:11.740: INFO: Waiting for pod pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf to disappear Apr 17 13:49:11.742: INFO: Pod pod-secrets-dce5f541-c895-4d90-a0a6-f5965e45dcbf no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:11.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-8058" for this suite. 
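The secret-volume permission check above corresponds to a pod along these lines; the secret name, file mode, and UID/GID values are illustrative rather than the ones the suite generates:

    kubectl create secret generic demo-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      securityContext:
        runAsUser: 1000          # non-root, as in the test name
        fsGroup: 2000
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls -ln /etc/secret-volume"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0440      # mode applied to the projected files
    EOF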
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1028,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:11.779: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a ReplicationController �[1mSTEP�[0m: Ensuring resource quota status captures replication controller creation �[1mSTEP�[0m: Deleting a ReplicationController �[1mSTEP�[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:22.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-5690" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":-1,"completed":60,"skipped":1050,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:22.905: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from NodePort to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a service nodeport-service with the type=NodePort in namespace services-9936 �[1mSTEP�[0m: Creating active service to test reachability when its FQDN is referred as externalName for another service �[1mSTEP�[0m: creating service externalsvc in namespace services-9936 �[1mSTEP�[0m: creating replication controller externalsvc in namespace services-9936 I0417 13:49:22.987142 19 runners.go:193] Created replication controller with name: externalsvc, namespace: services-9936, replica count: 2 I0417 13:49:26.038699 19 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: changing the NodePort service to type=ExternalName Apr 17 13:49:26.057: INFO: Creating new exec pod Apr 17 13:49:28.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9936 exec execpodhzx22 -- /bin/sh -x -c nslookup nodeport-service.services-9936.svc.cluster.local' Apr 17 13:49:28.302: INFO: stderr: "+ nslookup nodeport-service.services-9936.svc.cluster.local\n" Apr 17 13:49:28.302: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-9936.svc.cluster.local\tcanonical name = externalsvc.services-9936.svc.cluster.local.\nName:\texternalsvc.services-9936.svc.cluster.local\nAddress: 10.139.64.24\n\n" �[1mSTEP�[0m: deleting ReplicationController externalsvc in namespace services-9936, will wait for the garbage collector to delete the pods Apr 17 13:49:28.361: INFO: Deleting ReplicationController externalsvc took: 5.445479ms Apr 17 13:49:28.461: INFO: Terminating ReplicationController externalsvc pods took: 100.22845ms Apr 17 13:49:30.676: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:30.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-9936" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":61,"skipped":1060,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:30.786: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:49:30.821: INFO: The status of Pod server-envvars-bb3ece35-9178-4058-acff-995a223d8199 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:49:32.826: INFO: The status of Pod server-envvars-bb3ece35-9178-4058-acff-995a223d8199 is Running (Ready = true) Apr 17 13:49:32.846: INFO: Waiting up to 5m0s for pod "client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5" in namespace "pods-792" to be "Succeeded or Failed" Apr 17 13:49:32.849: INFO: Pod "client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.305062ms Apr 17 13:49:34.854: INFO: Pod "client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007666335s �[1mSTEP�[0m: Saw pod success Apr 17 13:49:34.854: INFO: Pod "client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5" satisfied condition "Succeeded or Failed" Apr 17 13:49:34.856: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5 container env3cont: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:49:34.869: INFO: Waiting for pod client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5 to disappear Apr 17 13:49:34.872: INFO: Pod client-envvars-332b4fa0-02cb-44cc-b3fe-e02ffb5b96d5 no longer exists [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:34.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-792" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":62,"skipped":1126,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:34.927: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should release no longer matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Given a ReplicationController is created �[1mSTEP�[0m: When the matched label of one of its pods change Apr 17 13:49:34.970: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 17 13:49:39.978: INFO: Pod name pod-release: Found 1 pods out of 1 �[1mSTEP�[0m: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:41.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-9752" for this suite. 
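The "release no longer matching pods" step above comes down to label selectors: once a pod's labels stop matching the controller's selector, the controller drops its ownerReference on the pod (releasing it) and scales up a replacement. With the pod-release name from the log and an otherwise illustrative label change:

    kubectl -n <namespace> get pods -l name=pod-release
    # Overwrite the matched label so the pod no longer satisfies the selector.
    kubectl -n <namespace> label pod <pod-name> name=released --overwrite
    # The released pod loses its controller ownerReference; the controller creates a new replica.
    kubectl -n <namespace> get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'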
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":63,"skipped":1163,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:41.067: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:49:43.111: INFO: Deleting pod "var-expansion-2aaf8409-9364-4610-a5b0-5ea9564ec589" in namespace "var-expansion-251" Apr 17 13:49:43.116: INFO: Wait up to 5m0s for pod "var-expansion-2aaf8409-9364-4610-a5b0-5ea9564ec589" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:49:45.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-251" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":64,"skipped":1195,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:49:45.207: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:49:45.746: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:49:48.767: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Listing all of the created validation webhooks Apr 17 13:49:58.840: INFO: Waiting for webhook configuration to be ready... Apr 17 13:50:08.960: INFO: Waiting for webhook configuration to be ready... Apr 17 13:50:19.062: INFO: Waiting for webhook configuration to be ready... Apr 17 13:50:29.169: INFO: Waiting for webhook configuration to be ready... Apr 17 13:50:39.190: INFO: Waiting for webhook configuration to be ready... 
Apr 17 13:50:39.190: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.18()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606 +0x637
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000232d00, 0x71566f0)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:50:39.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7791" for this suite.
STEP: Destroying namespace "webhook-7791-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.056 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 17 13:50:39.190: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":0,"skipped":17,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:43:52.083: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Apr 17 13:43:52.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 create -f -'
Apr 17 13:43:53.059:
INFO: stderr: "" Apr 17 13:43:53.059: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Apr 17 13:43:53.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:43:53.140: INFO: stderr: "" Apr 17 13:43:53.140: INFO: stdout: "update-demo-nautilus-7xscs update-demo-nautilus-ttjhj " Apr 17 13:43:53.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-7xscs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:43:53.209: INFO: stderr: "" Apr 17 13:43:53.209: INFO: stdout: "" Apr 17 13:43:53.209: INFO: update-demo-nautilus-7xscs is created but not running Apr 17 13:43:58.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:43:58.314: INFO: stderr: "" Apr 17 13:43:58.314: INFO: stdout: "update-demo-nautilus-7xscs update-demo-nautilus-ttjhj " Apr 17 13:43:58.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-7xscs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:43:58.380: INFO: stderr: "" Apr 17 13:43:58.380: INFO: stdout: "true" Apr 17 13:43:58.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-7xscs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:43:58.451: INFO: stderr: "" Apr 17 13:43:58.451: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:43:58.451: INFO: validating pod update-demo-nautilus-7xscs Apr 17 13:43:58.465: INFO: got data: { "image": "nautilus.jpg" } Apr 17 13:43:58.465: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 13:43:58.465: INFO: update-demo-nautilus-7xscs is verified up and running Apr 17 13:43:58.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-ttjhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:43:58.537: INFO: stderr: "" Apr 17 13:43:58.537: INFO: stdout: "true" Apr 17 13:43:58.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-ttjhj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:43:58.603: INFO: stderr: "" Apr 17 13:43:58.603: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:43:58.603: INFO: validating pod update-demo-nautilus-ttjhj Apr 17 13:47:32.090: INFO: update-demo-nautilus-ttjhj is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ttjhj) Apr 17 13:47:37.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:47:37.334: INFO: stderr: "" Apr 17 13:47:37.334: INFO: stdout: "update-demo-nautilus-7xscs update-demo-nautilus-ttjhj " Apr 17 13:47:37.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-7xscs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:47:37.418: INFO: stderr: "" Apr 17 13:47:37.418: INFO: stdout: "true" Apr 17 13:47:37.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-7xscs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:47:37.486: INFO: stderr: "" Apr 17 13:47:37.486: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:47:37.486: INFO: validating pod update-demo-nautilus-7xscs Apr 17 13:47:37.489: INFO: got data: { "image": "nautilus.jpg" } Apr 17 13:47:37.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 13:47:37.489: INFO: update-demo-nautilus-7xscs is verified up and running Apr 17 13:47:37.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-ttjhj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:47:37.558: INFO: stderr: "" Apr 17 13:47:37.558: INFO: stdout: "true" Apr 17 13:47:37.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods update-demo-nautilus-ttjhj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:47:37.623: INFO: stderr: "" Apr 17 13:47:37.623: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:47:37.623: INFO: validating pod update-demo-nautilus-ttjhj Apr 17 13:51:11.230: INFO: update-demo-nautilus-ttjhj is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-ttjhj) Apr 17 13:51:16.231: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x225 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0007bc9c0, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Apr 17 13:51:16.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 delete --grace-period=0 --force -f -' Apr 17 13:51:16.303: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 17 13:51:16.303: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 17 13:51:16.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get rc,svc -l name=update-demo --no-headers' Apr 17 13:51:16.401: INFO: stderr: "No resources found in kubectl-8224 namespace.\n" Apr 17 13:51:16.401: INFO: stdout: "" Apr 17 13:51:16.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8224 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 17 13:51:16.503: INFO: stderr: "" Apr 17 13:51:16.503: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:16.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8224" for this suite. 
• Failure [444.445 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
should create and stop a replication controller [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:51:16.231: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:49:03.953: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-3779
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3779 to expose endpoints map[]
Apr 17 13:49:03.993: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found
Apr 17 13:49:05.001: INFO: successfully validated that service multi-endpoint-test in namespace services-3779 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-3779
Apr 17 13:49:05.010: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:49:07.014: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3779 to expose endpoints map[pod1:[100]]
Apr 17 13:49:07.024: INFO: successfully validated that service multi-endpoint-test in namespace services-3779 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-3779
Apr 17 13:49:07.030: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:49:09.035: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3779 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 17 13:49:09.056: INFO: successfully validated that service multi-endpoint-test in namespace services-3779 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Apr 17 13:49:09.056: INFO: Creating new exec pod
Apr 17 13:49:12.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Apr 17 13:49:14.212: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Apr 17 13:49:14.212: INFO: stdout: ""
Apr 17 13:49:15.213: INFO: Running
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:17.362: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:17.363: INFO: stdout: "" Apr 17 13:49:18.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:20.360: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:20.360: INFO: stdout: "" Apr 17 13:49:21.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:23.353: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:23.353: INFO: stdout: "" Apr 17 13:49:24.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:26.384: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:26.384: INFO: stdout: "" Apr 17 13:49:27.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:29.385: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:29.385: INFO: stdout: "" Apr 17 13:49:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:32.355: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:32.355: INFO: stdout: "" Apr 17 13:49:33.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:35.368: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:35.368: INFO: stdout: "" Apr 17 13:49:36.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:38.356: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:38.356: INFO: stdout: "" Apr 17 13:49:39.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:41.347: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 
13:49:41.347: INFO: stdout: "" Apr 17 13:49:42.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:44.384: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:44.384: INFO: stdout: "" Apr 17 13:49:45.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:47.391: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:47.391: INFO: stdout: "" Apr 17 13:49:48.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:50.358: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:50.358: INFO: stdout: "" Apr 17 13:49:51.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:53.358: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:53.358: INFO: stdout: "" Apr 17 13:49:54.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:56.351: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:56.351: INFO: stdout: "" Apr 17 13:49:57.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:49:59.346: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:49:59.346: INFO: stdout: "" Apr 17 13:50:00.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:02.354: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:02.354: INFO: stdout: "" Apr 17 13:50:03.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:05.368: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:05.368: INFO: stdout: "" Apr 17 13:50:06.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:08.364: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to 
multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:08.364: INFO: stdout: "" Apr 17 13:50:09.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:11.376: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:11.376: INFO: stdout: "" Apr 17 13:50:12.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:14.362: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:14.362: INFO: stdout: "" Apr 17 13:50:15.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:17.349: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:17.349: INFO: stdout: "" Apr 17 13:50:18.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:20.368: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:20.369: INFO: stdout: "" Apr 17 13:50:21.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:23.367: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:23.368: INFO: stdout: "" Apr 17 13:50:24.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:26.365: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:26.365: INFO: stdout: "" Apr 17 13:50:27.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:29.361: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:29.361: INFO: stdout: "" Apr 17 13:50:30.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:32.360: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:32.361: INFO: stdout: "" Apr 17 13:50:33.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:35.354: INFO: stderr: "+ echo 
hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:35.354: INFO: stdout: "" Apr 17 13:50:36.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:38.355: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:38.355: INFO: stdout: "" Apr 17 13:50:39.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:41.422: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:41.422: INFO: stdout: "" Apr 17 13:50:42.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:44.385: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:44.385: INFO: stdout: "" Apr 17 13:50:45.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:47.354: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:47.354: INFO: stdout: "" Apr 17 13:50:48.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:50.387: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:50.387: INFO: stdout: "" Apr 17 13:50:51.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:53.356: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:53.356: INFO: stdout: "" Apr 17 13:50:54.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:56.365: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:56.365: INFO: stdout: "" Apr 17 13:50:57.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:50:59.352: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:50:59.352: INFO: stdout: "" Apr 17 13:51:00.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
multi-endpoint-test 80' Apr 17 13:51:02.364: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:02.364: INFO: stdout: "" Apr 17 13:51:03.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:05.372: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:05.372: INFO: stdout: "" Apr 17 13:51:06.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:08.365: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:08.365: INFO: stdout: "" Apr 17 13:51:09.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:11.357: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:11.358: INFO: stdout: "" Apr 17 13:51:12.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:14.356: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:14.356: INFO: stdout: "" Apr 17 13:51:14.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3779 exec execpodqd8jl -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:16.498: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:16.498: INFO: stdout: "" Apr 17 13:51:16.498: FAIL: Unexpected error: <*errors.errorString | 0xc002d3c530>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:913 +0x7c6 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2371919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000cf8b60, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:16.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3779" for this suite. 
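Triage note: every probe in the loop above completes its TCP connect ("Connection to multi-endpoint-test 80 port [tcp/http] succeeded!") yet stdout stays empty, so the connection is accepted but no data ever comes back and the 2m0s reachability check never passes. A minimal sketch of how the same check could be re-run by hand while the spec is still in flight, reusing the names from this run (the execpodqd8jl pod and the services-3779 namespace are deleted at teardown, so they are illustrative only):

  # Re-run the exact probe the test loops on
  kubectl --kubeconfig=/tmp/kubeconfig -n services-3779 exec execpodqd8jl -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
  # Confirm the Service still has both pod endpoints registered and the backing pods are Ready
  kubectl --kubeconfig=/tmp/kubeconfig -n services-3779 get endpoints multi-endpoint-test -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n services-3779 get pods pod1 pod2 -o wide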
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
• Failure [132.663 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:51:16.498: Unexpected error: <*errors.errorString | 0xc002d3c530>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:913
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":0,"skipped":17,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:16.531: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a replication controller
Apr 17 13:51:16.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 create -f -'
Apr 17 13:51:17.344: INFO: stderr: ""
Apr 17 13:51:17.344: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 17 13:51:17.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 17 13:51:17.450: INFO: stderr: ""
Apr 17 13:51:17.450: INFO: stdout: "update-demo-nautilus-rq6gc update-demo-nautilus-wvpdk "
Apr 17 13:51:17.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods update-demo-nautilus-rq6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:51:17.525: INFO: stderr: "" Apr 17 13:51:17.525: INFO: stdout: "" Apr 17 13:51:17.525: INFO: update-demo-nautilus-rq6gc is created but not running Apr 17 13:51:22.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 17 13:51:22.597: INFO: stderr: "" Apr 17 13:51:22.597: INFO: stdout: "update-demo-nautilus-rq6gc update-demo-nautilus-wvpdk " Apr 17 13:51:22.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods update-demo-nautilus-rq6gc -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:51:22.668: INFO: stderr: "" Apr 17 13:51:22.668: INFO: stdout: "true" Apr 17 13:51:22.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods update-demo-nautilus-rq6gc -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:51:22.738: INFO: stderr: "" Apr 17 13:51:22.738: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:51:22.738: INFO: validating pod update-demo-nautilus-rq6gc Apr 17 13:51:22.742: INFO: got data: { "image": "nautilus.jpg" } Apr 17 13:51:22.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 13:51:22.742: INFO: update-demo-nautilus-rq6gc is verified up and running Apr 17 13:51:22.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods update-demo-nautilus-wvpdk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 17 13:51:22.811: INFO: stderr: "" Apr 17 13:51:22.812: INFO: stdout: "true" Apr 17 13:51:22.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods update-demo-nautilus-wvpdk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 17 13:51:22.891: INFO: stderr: "" Apr 17 13:51:22.892: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 17 13:51:22.892: INFO: validating pod update-demo-nautilus-wvpdk Apr 17 13:51:22.896: INFO: got data: { "image": "nautilus.jpg" } Apr 17 13:51:22.896: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 17 13:51:22.896: INFO: update-demo-nautilus-wvpdk is verified up and running �[1mSTEP�[0m: using delete to clean up resources Apr 17 13:51:22.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 delete --grace-period=0 --force -f -' Apr 17 13:51:22.965: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Apr 17 13:51:22.965: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 17 13:51:22.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get rc,svc -l name=update-demo --no-headers'
Apr 17 13:51:23.055: INFO: stderr: "No resources found in kubectl-3758 namespace.\n"
Apr 17 13:51:23.055: INFO: stdout: ""
Apr 17 13:51:23.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3758 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 17 13:51:23.141: INFO: stderr: ""
Apr 17 13:51:23.141: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:23.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3758" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":1,"skipped":17,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":27,"skipped":519,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:16.621: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-9849
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9849 to expose endpoints map[]
Apr 17 13:51:16.754: INFO: successfully validated that service multi-endpoint-test in namespace services-9849 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-9849
Apr 17 13:51:16.817: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:51:18.821: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace
services-9849 to expose endpoints map[pod1:[100]] Apr 17 13:51:18.844: INFO: successfully validated that service multi-endpoint-test in namespace services-9849 exposes endpoints map[pod1:[100]] �[1mSTEP�[0m: Creating pod pod2 in namespace services-9849 Apr 17 13:51:18.854: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:51:20.858: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9849 to expose endpoints map[pod1:[100] pod2:[101]] Apr 17 13:51:20.872: INFO: successfully validated that service multi-endpoint-test in namespace services-9849 exposes endpoints map[pod1:[100] pod2:[101]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pods Apr 17 13:51:20.872: INFO: Creating new exec pod Apr 17 13:51:23.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9849 exec execpod2nbb7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Apr 17 13:51:24.132: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Apr 17 13:51:24.132: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:51:24.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9849 exec execpod2nbb7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.32.248 80' Apr 17 13:51:24.293: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.32.248 80\nConnection to 10.131.32.248 80 port [tcp/http] succeeded!\n" Apr 17 13:51:24.293: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:51:24.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9849 exec execpod2nbb7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81' Apr 17 13:51:24.488: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n" Apr 17 13:51:24.488: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:51:24.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9849 exec execpod2nbb7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.32.248 81' Apr 17 13:51:24.681: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.32.248 81\nConnection to 10.131.32.248 81 port [tcp/*] succeeded!\n" Apr 17 13:51:24.681: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod1 in namespace services-9849 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9849 to expose endpoints map[pod2:[101]] Apr 17 13:51:24.731: INFO: successfully validated that service multi-endpoint-test in namespace services-9849 exposes endpoints map[pod2:[101]] �[1mSTEP�[0m: Deleting pod pod2 in namespace services-9849 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-9849 to expose endpoints map[] Apr 17 13:51:24.772: INFO: successfully validated that service multi-endpoint-test in namespace services-9849 exposes endpoints map[] [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:24.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9849" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":28,"skipped":519,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":64,"skipped":1241,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:50:39.267: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:50:39.851: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:50:42.874: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
Apr 17 13:50:52.943: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:51:03.065: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:51:13.169: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:51:23.266: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:51:33.434: INFO: Waiting for webhook configuration to be ready...
Apr 17 13:51:33.434: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002482b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.18()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606 +0x637
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000232d00, 0x71566f0)
/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:33.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1579" for this suite.
STEP: Destroying namespace "webhook-1579-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.583 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
listing validating webhooks should work [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:51:33.434: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc0002482b0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:606
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:24.853: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 17 13:51:31.099: INFO: 80 pods remaining
Apr 17 13:51:31.099: INFO: 80 pods has nil DeletionTimestamp
Apr 17 13:51:31.099: INFO:
Apr 17 13:51:32.049: INFO: 71 pods remaining
Apr 17 13:51:32.049: INFO: 71 pods has nil DeletionTimestamp
Apr 17 13:51:32.050: INFO:
Apr 17 13:51:33.012: INFO: 60 pods remaining
Apr 17 13:51:33.012: INFO: 60 pods has nil DeletionTimestamp
Apr 17 13:51:33.012: INFO:
Apr 17 13:51:34.067: INFO: 40 pods remaining
Apr 17 13:51:34.067: INFO: 40 pods has nil DeletionTimestamp
Apr 17 13:51:34.067: INFO:
Apr 17 13:51:35.082: INFO: 31 pods remaining
Apr 17 13:51:35.082: INFO: 30 pods has nil DeletionTimestamp
Apr 17 13:51:35.082: INFO:
Apr 17 13:51:36.052: INFO: 19 pods remaining
Apr 17 13:51:36.052: INFO: 19 pods has nil DeletionTimestamp
Apr 17 13:51:36.052: INFO:
Apr 17 13:51:36.999: INFO: 0 pods remaining
Apr 17 13:51:36.999: INFO: 0 pods has nil DeletionTimestamp
Apr 17 13:51:36.999: INFO:
STEP: Gathering metrics
Apr 17 13:51:38.079: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-4exvhp-control-plane-ss4pf is Running (Ready = true)
Apr 17 13:51:38.303: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:38.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3795" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":29,"skipped":528,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:38.363: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
[It] should find the server version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Request ServerVersion
STEP: Confirm major version
Apr 17 13:51:38.436: INFO: Major version: 1
STEP: Confirm minor version
Apr 17 13:51:38.436: INFO: cleanMinorVersion: 23
Apr 17 13:51:38.436: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:38.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-2039" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":30,"skipped":544,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:38.609: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 17 13:51:46.882: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:51:46.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-133" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":603,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":64,"skipped":1241,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:51:33.853: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:51:34.516: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 17 13:51:36.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1,
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 13:51:38.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 13:51:40.574: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 13:51:42.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 13:51:44.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 17 13:51:46.558: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 51, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:51:49.575: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Listing all of the created validation webhooks �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules �[1mSTEP�[0m: Deleting the collection of validation webhooks �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:49.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5385" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5385-markers" for this suite. 
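For reference, a minimal client-go sketch of the two calls the "listing validating webhooks should work" spec above makes: list the labelled ValidatingWebhookConfigurations, then delete the whole collection so a ConfigMap that previously violated the webhook rules can be created again. The client variable, label selector and function name are illustrative assumptions, not the values the e2e framework uses.

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listAndDeleteValidatingWebhooks lists ValidatingWebhookConfigurations by label,
// then deletes them as a collection in a single call.
func listAndDeleteValidatingWebhooks(ctx context.Context, cs kubernetes.Interface) error {
	sel := "e2e-list-test-webhooks=true" // hypothetical label selector
	admission := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()

	list, err := admission.List(ctx, metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		return err
	}
	fmt.Printf("found %d validating webhook configurations\n", len(list.Items))

	// Removing the collection should let the previously denied ConfigMap be created.
	return admission.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{LabelSelector: sel})
}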
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":65,"skipped":1241,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:47.028: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-a3267570-d344-4a3a-9c8b-95a3abbabc3c �[1mSTEP�[0m: Creating the pod �[1mSTEP�[0m: Waiting for pod with text data �[1mSTEP�[0m: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:51.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-5219" for this suite. 
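A hedged sketch of the object the "binary data should be reflected in volume" spec above consumes: a ConfigMap carrying both text Data and BinaryData, which the test pod then reads back from a mounted volume. The object name suffix, key names and payload bytes are illustrative.

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createBinaryConfigMap creates a ConfigMap holding both text and binary payloads.
func createBinaryConfigMap(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd-example"},
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}},
	}
	_, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
	return err
}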
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":694,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:23.163: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-8068 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Apr 17 13:51:23.191: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Apr 17 13:51:23.242: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:51:25.246: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:27.247: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:29.251: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:31.288: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:33.251: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:35.292: INFO: The status of Pod netserver-0 is Running (Ready = false) Apr 17 13:51:37.247: INFO: The status of Pod netserver-0 is Running (Ready = true) Apr 17 13:51:37.266: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 13:51:39.275: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 13:51:41.274: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 13:51:43.271: INFO: The status of Pod netserver-1 is Running (Ready = false) Apr 17 13:51:45.270: INFO: The status of Pod netserver-1 is Running (Ready = true) Apr 17 13:51:45.275: INFO: The status of Pod netserver-2 is Running (Ready = false) Apr 17 13:51:47.279: INFO: The status of Pod netserver-2 is Running (Ready = true) Apr 17 13:51:47.284: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Apr 17 13:51:51.299: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Apr 17 13:51:51.299: INFO: Breadth first check of 192.168.2.42 on host 172.18.0.6... 
Apr 17 13:51:51.301: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.2.42&port=8083&tries=1'] Namespace:pod-network-test-8068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:51.302: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:51.302: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:51.302: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-8068/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.46%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.42%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:51:51.394: INFO: Waiting for responses: map[] Apr 17 13:51:51.394: INFO: reached 192.168.2.42 after 0/1 tries Apr 17 13:51:51.394: INFO: Breadth first check of 192.168.0.30 on host 172.18.0.4... Apr 17 13:51:51.398: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.0.30&port=8083&tries=1'] Namespace:pod-network-test-8068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:51.398: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:51.398: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:51.398: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-8068/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.46%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.30%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:51:51.491: INFO: Waiting for responses: map[] Apr 17 13:51:51.491: INFO: reached 192.168.0.30 after 0/1 tries Apr 17 13:51:51.491: INFO: Breadth first check of 192.168.3.52 on host 172.18.0.7... Apr 17 13:51:51.494: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.3.52&port=8083&tries=1'] Namespace:pod-network-test-8068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:51.494: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:51.495: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:51.495: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-8068/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.46%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.3.52%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:51:51.586: INFO: Waiting for responses: map[] Apr 17 13:51:51.586: INFO: reached 192.168.3.52 after 0/1 tries Apr 17 13:51:51.586: INFO: Breadth first check of 192.168.6.20 on host 172.18.0.5... 
Apr 17 13:51:51.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.6.46:9080/dial?request=hostname&protocol=http&host=192.168.6.20&port=8083&tries=1'] Namespace:pod-network-test-8068 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:51.589: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:51.590: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:51.590: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-8068/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.6.46%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.6.20%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:51:51.684: INFO: Waiting for responses: map[] Apr 17 13:51:51.684: INFO: reached 192.168.6.20 after 0/1 tries Apr 17 13:51:51.684: INFO: Going to retry 0 out of 4 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:51.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-8068" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:51.107: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 17 13:51:51.144: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-2520 0e1bba9b-3b9d-4c67-9080-13564c2970f8 11938 0 2022-04-17 13:51:51 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-04-17 13:51:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nn6tr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nn6tr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tole
ration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 17 13:51:51.152: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:51:53.155: INFO: The status of Pod test-dns-nameservers is Running (Ready = true) �[1mSTEP�[0m: Verifying customized DNS suffix list is configured on pod... Apr 17 13:51:53.155: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2520 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:53.155: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:53.156: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:53.157: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-2520/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: Verifying customized DNS server is configured on pod... Apr 17 13:51:53.255: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2520 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 17 13:51:53.255: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 17 13:51:53.256: INFO: ExecWithOptions: Clientset creation Apr 17 13:51:53.256: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-2520/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING)) Apr 17 13:51:53.360: INFO: Deleting pod test-dns-nameservers... [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:53.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-2520" for this suite. 
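The pod dump above reduces to the following shape. This is a minimal sketch of the spec's pod: dnsPolicy None plus the customized dnsConfig being verified (nameserver 1.1.1.1, search path resolv.conf.local), using the agnhost image shown in the dump; the helper function name is illustrative.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dnsTestPod builds a pod whose resolv.conf contains only what the pod spec provides.
func dnsTestPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-dns-nameservers", Namespace: ns},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost-container",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33",
				Args:  []string{"pause"},
			}},
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}
}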
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":33,"skipped":700,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:53.395: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Starting the proxy Apr 17 13:51:53.444: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8481 proxy --unix-socket=/tmp/kubectl-proxy-unix2006353881/test' �[1mSTEP�[0m: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:53.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8481" for this suite. 
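A small sketch of the check made above, assuming a `kubectl proxy --unix-socket=<path>` process is already running: dial the socket directly and fetch /api/. The function name and URL host are placeholders; the transport ignores the host when dialing a unix socket.

package example

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

// readProxyAPI fetches /api/ through a kubectl proxy listening on a unix socket.
func readProxyAPI(sockPath string) (string, error) {
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request through the unix socket instead of TCP.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sockPath)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fmt.Println(resp.Status)
	return string(body), nil
}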
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":34,"skipped":710,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:49.878: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:51:49.906: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 17 13:51:53.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8710 --namespace=crd-publish-openapi-8710 create -f -' Apr 17 13:51:54.013: INFO: stderr: "" Apr 17 13:51:54.013: INFO: stdout: "e2e-test-crd-publish-openapi-5260-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 13:51:54.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8710 --namespace=crd-publish-openapi-8710 delete e2e-test-crd-publish-openapi-5260-crds test-cr' Apr 17 13:51:54.104: INFO: stderr: "" Apr 17 13:51:54.104: INFO: stdout: "e2e-test-crd-publish-openapi-5260-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 17 13:51:54.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8710 --namespace=crd-publish-openapi-8710 apply -f -' Apr 17 13:51:54.315: INFO: stderr: "" Apr 17 13:51:54.316: INFO: stdout: "e2e-test-crd-publish-openapi-5260-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 17 13:51:54.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8710 --namespace=crd-publish-openapi-8710 delete e2e-test-crd-publish-openapi-5260-crds test-cr' Apr 17 13:51:54.385: INFO: stderr: "" Apr 17 13:51:54.385: INFO: stdout: "e2e-test-crd-publish-openapi-5260-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Apr 17 13:51:54.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8710 explain e2e-test-crd-publish-openapi-5260-crds' Apr 17 13:51:54.586: INFO: stderr: "" Apr 17 
13:51:54.586: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-5260-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:56.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-8710" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":66,"skipped":1300,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:56.882: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating secret with name secret-test-3c9876ae-f9d8-489b-9de3-e647ced8d4e5 �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 17 13:51:56.951: INFO: Waiting up to 5m0s for pod "pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4" in namespace "secrets-7047" to be "Succeeded or Failed" Apr 17 13:51:56.966: INFO: Pod "pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.729347ms Apr 17 13:51:58.971: INFO: Pod "pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.020340566s �[1mSTEP�[0m: Saw pod success Apr 17 13:51:58.971: INFO: Pod "pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4" satisfied condition "Succeeded or Failed" Apr 17 13:51:58.974: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:51:58.992: INFO: Waiting for pod pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4 to disappear Apr 17 13:51:58.995: INFO: Pod pod-secrets-1ec86574-3a0f-42d7-ab12-a15eb9aa1cb4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:51:58.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-7047" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1317,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:51.702: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename subpath �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 �[1mSTEP�[0m: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating pod pod-subpath-test-configmap-czss �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Apr 17 13:51:51.797: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-czss" in namespace "subpath-7963" to be "Succeeded or Failed" Apr 17 13:51:51.801: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Pending", Reason="", readiness=false. Elapsed: 3.681918ms Apr 17 13:51:53.805: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 2.007376765s Apr 17 13:51:55.819: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 4.021446276s Apr 17 13:51:57.823: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 6.025606706s Apr 17 13:51:59.828: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. 
Elapsed: 8.030899651s Apr 17 13:52:01.833: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 10.03558463s Apr 17 13:52:03.839: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 12.041189908s Apr 17 13:52:05.843: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 14.045580286s Apr 17 13:52:07.848: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 16.050779985s Apr 17 13:52:09.853: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 18.055220517s Apr 17 13:52:11.857: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Running", Reason="", readiness=true. Elapsed: 20.059336209s Apr 17 13:52:13.862: INFO: Pod "pod-subpath-test-configmap-czss": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.064598014s �[1mSTEP�[0m: Saw pod success Apr 17 13:52:13.862: INFO: Pod "pod-subpath-test-configmap-czss" satisfied condition "Succeeded or Failed" Apr 17 13:52:13.865: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod pod-subpath-test-configmap-czss container test-container-subpath-configmap-czss: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:52:13.894: INFO: Waiting for pod pod-subpath-test-configmap-czss to disappear Apr 17 13:52:13.896: INFO: Pod pod-subpath-test-configmap-czss no longer exists �[1mSTEP�[0m: Deleting pod pod-subpath-test-configmap-czss Apr 17 13:52:13.896: INFO: Deleting pod "pod-subpath-test-configmap-czss" in namespace "subpath-7963" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:13.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "subpath-7963" for this suite. 
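For orientation, a sketch of the mount shape the subpath spec above exercises: a container mounting a single key of a ConfigMap-backed volume at a subPath. The volume name, file name and image are illustrative assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// subpathContainer mounts one file of a ConfigMap volume at a subPath inside the container.
func subpathContainer() corev1.Container {
	return corev1.Container{
		Name:  "test-container-subpath-configmap",
		Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33",
		VolumeMounts: []corev1.VolumeMount{{
			Name:      "test-volume", // a volume backed by the test ConfigMap
			MountPath: "/test-volume",
			SubPath:   "test-file", // only this key is projected at the mount point
		}},
	}
}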
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":3,"skipped":24,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:13.912: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9773.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9773.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9773.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9773.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Apr 17 13:52:19.987: INFO: DNS probes using dns-9773/dns-test-ea039280-475f-4ebe-9728-7e145c9b8349 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:20.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-9773" for this suite. 
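A hedged sketch of what makes the probed names above resolvable: a headless Service (clusterIP None) plus a pod whose hostname and subdomain match it, giving dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local an A record. The selector labels, port and image are assumptions.

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessServiceAndPod pairs a headless Service with a pod whose hostname/subdomain match it.
func headlessServiceAndPod(ns string) (*corev1.Service, *corev1.Pod) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: ns},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: DNS records point at pod IPs
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-querier-2",
			Namespace: ns,
			Labels:    map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:  "querier",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33",
			}},
		},
	}
	return svc, pod
}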
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":25,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:20.061: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:52:20.329: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Apr 17 13:52:22.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 17, 13, 52, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 52, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 17, 13, 52, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 17, 13, 52, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:52:25.542: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:52:25.546: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-7298-crds.webhook.example.com via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:28.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready �[1mSTEP�[0m: Destroying namespace "webhook-5680" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5680-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":5,"skipped":60,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:28.738: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should serve a basic endpoint from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating service endpoint-test2 in namespace services-6720 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-6720 to expose endpoints map[] Apr 17 13:52:28.815: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found Apr 17 13:52:29.822: INFO: successfully validated that service endpoint-test2 in namespace services-6720 exposes endpoints map[] �[1mSTEP�[0m: Creating pod pod1 in namespace services-6720 Apr 17 13:52:29.832: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:52:31.836: INFO: The status of Pod pod1 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-6720 to expose endpoints map[pod1:[80]] Apr 17 13:52:31.846: INFO: successfully validated that service endpoint-test2 in namespace services-6720 exposes endpoints map[pod1:[80]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pod1 Apr 17 13:52:31.846: INFO: Creating new exec pod Apr 17 13:52:34.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 17 13:52:35.009: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 17 13:52:35.009: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:52:35.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.48.96 80' Apr 17 13:52:35.161: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.48.96 80\nConnection to 10.129.48.96 80 port [tcp/http] succeeded!\n" Apr 17 13:52:35.161: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: 
text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Creating pod pod2 in namespace services-6720 Apr 17 13:52:35.169: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:52:37.172: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-6720 to expose endpoints map[pod1:[80] pod2:[80]] Apr 17 13:52:37.185: INFO: successfully validated that service endpoint-test2 in namespace services-6720 exposes endpoints map[pod1:[80] pod2:[80]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pod1 and pod2 Apr 17 13:52:38.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 17 13:52:38.327: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 17 13:52:38.327: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:52:38.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.48.96 80' Apr 17 13:52:38.468: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.48.96 80\nConnection to 10.129.48.96 80 port [tcp/http] succeeded!\n" Apr 17 13:52:38.468: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod1 in namespace services-6720 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-6720 to expose endpoints map[pod2:[80]] Apr 17 13:52:38.503: INFO: successfully validated that service endpoint-test2 in namespace services-6720 exposes endpoints map[pod2:[80]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pod2 Apr 17 13:52:39.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80' Apr 17 13:52:39.647: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n" Apr 17 13:52:39.647: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:52:39.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6720 exec execpod9jxx8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.48.96 80' Apr 17 13:52:39.802: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.48.96 80\nConnection to 10.129.48.96 80 port [tcp/http] succeeded!\n" Apr 17 13:52:39.802: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" �[1mSTEP�[0m: Deleting pod pod2 in namespace services-6720 �[1mSTEP�[0m: waiting up to 3m0s for service endpoint-test2 in namespace services-6720 to expose endpoints map[] Apr 17 13:52:40.824: INFO: successfully validated that service endpoint-test2 in namespace services-6720 exposes endpoints map[] [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:40.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready �[1mSTEP�[0m: Destroying namespace "services-6720" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":6,"skipped":66,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:40.866: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:52:40.913: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:41.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-3765" for this suite. 
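A minimal sketch of the /status interaction the CustomResourceDefinition spec above covers, assuming an apiextensions clientset is available; the patch body is an illustrative merge patch, not the payload the test actually sends.

package example

import (
	"context"

	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

// patchCRDStatus sends a merge patch routed through the CRD's "status" subresource.
func patchCRDStatus(ctx context.Context, cs apiextclient.Interface, crdName string) error {
	patch := []byte(`{"status":{"conditions":[]}}`) // illustrative payload
	_, err := cs.ApiextensionsV1().CustomResourceDefinitions().
		Patch(ctx, crdName, types.MergePatchType, patch, metav1.PatchOptions{}, "status")
	return err
}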
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":7,"skipped":67,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:41.484: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: validating api versions Apr 17 13:52:41.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9062 api-versions' Apr 17 13:52:41.592: INFO: stderr: "" Apr 17 13:52:41.592: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:41.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9062" for this suite. 
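The programmatic equivalent of the `kubectl api-versions` call above, as a small discovery-client sketch that prints the same group/version list; it assumes a populated rest.Config and an illustrative function name.

package example

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

// printAPIVersions asks the discovery endpoint for every group/version the apiserver serves.
func printAPIVersions(cfg *rest.Config) error {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			fmt.Println(v.GroupVersion) // e.g. "apps/v1", "batch/v1", "v1"
		}
	}
	return nil
}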
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":8,"skipped":91,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:52:41.606: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:52:41.664: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-f8d0c4bb-b9c7-4fc2-b4e5-cb90b835cf63" in namespace "security-context-test-5869" to be "Succeeded or Failed" Apr 17 13:52:41.675: INFO: Pod "busybox-privileged-false-f8d0c4bb-b9c7-4fc2-b4e5-cb90b835cf63": Phase="Pending", Reason="", readiness=false. Elapsed: 10.224303ms Apr 17 13:52:43.679: INFO: Pod "busybox-privileged-false-f8d0c4bb-b9c7-4fc2-b4e5-cb90b835cf63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014160282s Apr 17 13:52:43.679: INFO: Pod "busybox-privileged-false-f8d0c4bb-b9c7-4fc2-b4e5-cb90b835cf63" satisfied condition "Succeeded or Failed" Apr 17 13:52:43.684: INFO: Got logs for pod "busybox-privileged-false-f8d0c4bb-b9c7-4fc2-b4e5-cb90b835cf63": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:52:43.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-5869" for this suite. 
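A short sketch of the knob this spec flips: a container with privileged set to false, which is why the `ip` invocation in the pod log above is refused with "Operation not permitted". The image and command are approximations of what the suite runs, not taken from the log.

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// unprivilegedContainer runs a netlink-touching command without privileged mode, so it is denied.
func unprivilegedContainer() corev1.Container {
	privileged := false
	return corev1.Container{
		Name:    "busybox-privileged-false",
		Image:   "busybox",
		Command: []string{"ip", "link", "add", "dummy0", "type", "dummy"},
		SecurityContext: &corev1.SecurityContext{
			Privileged: &privileged,
		},
	}
}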
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":94,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:43.800: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:52:44.186: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:52:47.207: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:47.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8162" for this suite.
STEP: Destroying namespace "webhook-8162-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":179,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:47.297: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 17 13:52:47.336: INFO: Waiting up to 5m0s for pod "pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a" in namespace "emptydir-4673" to be "Succeeded or Failed"
Apr 17 13:52:47.339: INFO: Pod "pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.828263ms
Apr 17 13:52:49.343: INFO: Pod "pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007115811s
STEP: Saw pod success
Apr 17 13:52:49.343: INFO: Pod "pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a" satisfied condition "Succeeded or Failed"
Apr 17 13:52:49.346: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a container test-container: <nil>
STEP: delete the pod
Apr 17 13:52:49.359: INFO: Waiting for pod pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a to disappear
Apr 17 13:52:49.362: INFO: Pod pod-f6eb3c00-d4ec-45da-93e2-f4596faea87a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:49.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4673" for this suite.
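The (root,0666,tmpfs) spec that just passed creates a pod with an emptyDir volume backed by memory (tmpfs) and verifies file permissions of 0666 on it. A rough Go sketch of such a pod object follows, assuming a busybox image and a hand-written check command; the real suite uses its own mount-test tooling.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // assumed image
				// Assumed check: create a file with mode 0666 and print its permissions back.
				Command:      []string{"sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}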
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":188,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:49.404: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 17 13:52:49.442: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 17 13:52:49.445: INFO: starting watch
STEP: patching
STEP: updating
Apr 17 13:52:49.458: INFO: waiting for watch events with expected annotations
Apr 17 13:52:49.458: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:49.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-6890" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":12,"skipped":214,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:49.502: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Apr 17 13:52:50.579: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-4exvhp-control-plane-ss4pf is Running (Ready = true)
Apr 17 13:52:50.708: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:50.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3648" for this suite.
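The garbage-collector spec above hinges on one API detail: deleting the Deployment with deleteOptions.propagationPolicy=Orphan must leave its ReplicaSet behind. A minimal client-go sketch of that call, assuming the /tmp/kubeconfig used by this run and a hypothetical Deployment named "test-deployment" in the default namespace:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption taken from the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Deleting with the Orphan propagation policy removes the Deployment but
	// leaves its dependent ReplicaSet (and Pods) in place, which is what the spec checks.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(context.TODO(),
		"test-deployment", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
	fmt.Println("deployment deleted; dependent ReplicaSet should be orphaned")
}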
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":13,"skipped":216,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:50.721: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 17 13:52:50.751: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:53.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-224" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":14,"skipped":218,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:53.914: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 17 13:52:53.952: INFO: Waiting up to 5m0s for pod "downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607" in namespace "downward-api-2588" to be "Succeeded or Failed"
Apr 17 13:52:53.956: INFO: Pod "downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607": Phase="Pending", Reason="", readiness=false. Elapsed: 2.98745ms
Apr 17 13:52:55.962: INFO: Pod "downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00893756s
STEP: Saw pod success
Apr 17 13:52:55.962: INFO: Pod "downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607" satisfied condition "Succeeded or Failed"
Apr 17 13:52:55.973: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-bdcgq2 pod downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607 container client-container: <nil>
STEP: delete the pod
Apr 17 13:52:56.009: INFO: Waiting for pod downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607 to disappear
Apr 17 13:52:56.012: INFO: Pod downwardapi-volume-350035de-e91b-4d3f-8e22-b807fca5a607 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:56.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2588" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":218,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:56.076: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:52:56.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4522 version'
Apr 17 13:52:56.200: INFO: stderr: ""
Apr 17 13:52:56.200: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.5\", GitCommit:\"c285e781331a3785a7f436042c65c5641ce8a9e9\", GitTreeState:\"clean\", BuildDate:\"2022-03-16T15:58:47Z\", GoVersion:\"go1.17.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.5\", GitCommit:\"c285e781331a3785a7f436042c65c5641ce8a9e9\", GitTreeState:\"clean\", BuildDate:\"2022-03-24T22:06:50Z\", GoVersion:\"go1.17.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:56.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4522" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":16,"skipped":245,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:56.243: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 17 13:52:56.283: INFO: Waiting up to 5m0s for pod "pod-e046a5ad-8e21-464a-9515-01bbaf3f5722" in namespace "emptydir-4562" to be "Succeeded or Failed"
Apr 17 13:52:56.287: INFO: Pod "pod-e046a5ad-8e21-464a-9515-01bbaf3f5722": Phase="Pending", Reason="", readiness=false. Elapsed: 3.452895ms
Apr 17 13:52:58.292: INFO: Pod "pod-e046a5ad-8e21-464a-9515-01bbaf3f5722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008148675s
STEP: Saw pod success
Apr 17 13:52:58.292: INFO: Pod "pod-e046a5ad-8e21-464a-9515-01bbaf3f5722" satisfied condition "Succeeded or Failed"
Apr 17 13:52:58.294: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod pod-e046a5ad-8e21-464a-9515-01bbaf3f5722 container test-container: <nil>
STEP: delete the pod
Apr 17 13:52:58.310: INFO: Waiting for pod pod-e046a5ad-8e21-464a-9515-01bbaf3f5722 to disappear
Apr 17 13:52:58.313: INFO: Pod pod-e046a5ad-8e21-464a-9515-01bbaf3f5722 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:52:58.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4562" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":273,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:52:58.358: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 17 13:52:58.400: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:03.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5946" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":302,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:03.125: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-d35f10db-dcfe-41a1-80a0-3b64991c4b75
STEP: Creating a pod to test consume secrets
Apr 17 13:53:03.171: INFO: Waiting up to 5m0s for pod "pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a" in namespace "secrets-9135" to be "Succeeded or Failed"
Apr 17 13:53:03.178: INFO: Pod "pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.551134ms
Apr 17 13:53:05.184: INFO: Pod "pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013284982s
STEP: Saw pod success
Apr 17 13:53:05.184: INFO: Pod "pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a" satisfied condition "Succeeded or Failed"
Apr 17 13:53:05.187: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a container secret-volume-test: <nil>
STEP: delete the pod
Apr 17 13:53:05.200: INFO: Waiting for pod pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a to disappear
Apr 17 13:53:05.202: INFO: Pod pod-secrets-cd0048f9-b24f-4e77-b715-9372fdb5535a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:05.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9135" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":358,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:05.225: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:53:05.260: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Pending, waiting for it to be Running (with Ready = true)
Apr 17 13:53:07.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:09.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:11.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:13.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:15.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:17.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:19.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:21.265: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:23.264: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:25.266: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = false)
Apr 17 13:53:27.265: INFO: The status of Pod test-webserver-765676a6-b51f-4b43-a4f6-dc68eccb26a6 is Running (Ready = true)
Apr 17 13:53:27.268: INFO: Container started at 2022-04-17 13:53:05 +0000 UTC, pod became ready at 2022-04-17 13:53:25 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:27.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5023" for this suite.
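The readiness-probe spec above relies on a pod whose container defines a readiness probe with an initial delay, so the pod stays Running but not Ready for roughly that long (the ~20 s gap between "started" and "became ready" in the log). A sketch of such a container spec against the v1.23 API follows; the image and the probe values are illustrative assumptions, and note that in this API version the embedded handler field is named ProbeHandler.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // assumed image; the suite uses its own test webserver
				ReadinessProbe: &corev1.Probe{
					// The container is not marked Ready until the initial delay has
					// elapsed and the HTTP check succeeds.
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}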
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":369,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:27.350: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 17 13:53:27.674: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 17 13:53:30.696: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:30.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-834" for this suite.
STEP: Destroying namespace "webhook-834-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":21,"skipped":431,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:30.810: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-map-d10072e4-43f1-4162-ad8b-cf007009d119
STEP: Creating a pod to test consume configMaps
Apr 17 13:53:30.851: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f" in namespace "configmap-5390" to be "Succeeded or Failed"
Apr 17 13:53:30.855: INFO: Pod "pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53809ms
Apr 17 13:53:32.860: INFO: Pod "pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009223059s
STEP: Saw pod success
Apr 17 13:53:32.860: INFO: Pod "pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f" satisfied condition "Succeeded or Failed"
Apr 17 13:53:32.863: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f container agnhost-container: <nil>
STEP: delete the pod
Apr 17 13:53:32.887: INFO: Waiting for pod pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f to disappear
Apr 17 13:53:32.890: INFO: Pod pod-configmaps-fc54ae20-fc15-4df1-bd4a-f72df32fa30f no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:32.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5390" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":442,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:32.996: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 13:53:33.022: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 17 13:53:35.049: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:36.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4877" for this suite.
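The ReplicationController spec above first creates a ResourceQuota that caps the namespace at two pods, then creates an RC asking for more replicas and expects a failure condition to appear and later clear once the RC is scaled down. A minimal Go sketch of that quota object; the "condition-test" name matches the log, everything else is an assumption.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A quota allowing only two pods in the namespace; an RC that asks for more
	// should then surface a ReplicaFailure-style condition, as the spec checks.
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
		},
	}
	out, _ := json.MarshalIndent(quota, "", "  ")
	fmt.Println(string(out))
}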
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":23,"skipped":518,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:36.090: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-zrmz
STEP: Creating a pod to test atomic-volume-subpath
Apr 17 13:53:36.130: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zrmz" in namespace "subpath-5584" to be "Succeeded or Failed"
Apr 17 13:53:36.132: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Pending", Reason="", readiness=false. Elapsed: 2.475537ms
Apr 17 13:53:38.136: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 2.006086753s
Apr 17 13:53:40.139: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 4.00987544s
Apr 17 13:53:42.144: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 6.013965366s
Apr 17 13:53:44.147: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 8.017868124s
Apr 17 13:53:46.151: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 10.021358167s
Apr 17 13:53:48.155: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 12.025301501s
Apr 17 13:53:50.159: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 14.029570535s
Apr 17 13:53:52.163: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 16.033349765s
Apr 17 13:53:54.167: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 18.037755937s
Apr 17 13:53:56.172: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Running", Reason="", readiness=true. Elapsed: 20.042252264s
Apr 17 13:53:58.176: INFO: Pod "pod-subpath-test-configmap-zrmz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.046086839s
STEP: Saw pod success
Apr 17 13:53:58.176: INFO: Pod "pod-subpath-test-configmap-zrmz" satisfied condition "Succeeded or Failed"
Apr 17 13:53:58.179: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-subpath-test-configmap-zrmz container test-container-subpath-configmap-zrmz: <nil>
STEP: delete the pod
Apr 17 13:53:58.191: INFO: Waiting for pod pod-subpath-test-configmap-zrmz to disappear
Apr 17 13:53:58.193: INFO: Pod pod-subpath-test-configmap-zrmz no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zrmz
Apr 17 13:53:58.193: INFO: Deleting pod "pod-subpath-test-configmap-zrmz" in namespace "subpath-5584"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:53:58.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5584" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":24,"skipped":537,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:53:58.229: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 17 13:53:58.264: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18" in namespace "downward-api-4854" to be "Succeeded or Failed"
Apr 17 13:53:58.268: INFO: Pod "downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18": Phase="Pending", Reason="", readiness=false. Elapsed: 3.14651ms
Apr 17 13:54:00.272: INFO: Pod "downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006976595s
STEP: Saw pod success
Apr 17 13:54:00.272: INFO: Pod "downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18" satisfied condition "Succeeded or Failed"
Apr 17 13:54:00.275: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18 container client-container: <nil>
STEP: delete the pod
Apr 17 13:54:00.288: INFO: Waiting for pod downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18 to disappear
Apr 17 13:54:00.291: INFO: Pod downwardapi-volume-7cc5615f-8544-45c6-a241-d9b42577ea18 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:54:00.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4854" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":557,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:54:00.321: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:54:04.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-3524" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":26,"skipped":573,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:04.437: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:54:04.482: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 17 13:54:09.488: INFO: Pod name cleanup-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Apr 17 13:54:09.488: INFO: Creating deployment test-cleanup-deployment �[1mSTEP�[0m: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 17 13:54:09.509: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6563 e1efbae5-4aff-4b22-88f7-b894a4c0d389 13341 1 2022-04-17 13:54:09 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2022-04-17 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005082128 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 17 13:54:09.512: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil. Apr 17 13:54:09.512: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 17 13:54:09.512: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6563 90d10608-7e54-43ca-88d1-eea126a934a7 13343 1 2022-04-17 13:54:04 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e1efbae5-4aff-4b22-88f7-b894a4c0d389 0xc005082467 0xc005082468}] [] [{e2e.test Update apps/v1 2022-04-17 13:54:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-17 13:54:05 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2022-04-17 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"e1efbae5-4aff-4b22-88f7-b894a4c0d389\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005082528 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 17 13:54:09.517: INFO: Pod "test-cleanup-controller-4c68n" is available: &Pod{ObjectMeta:{test-cleanup-controller-4c68n test-cleanup-controller- deployment-6563 1910f258-b3f3-4f7a-9f0a-c7bd8d612986 13321 0 2022-04-17 13:54:04 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 90d10608-7e54-43ca-88d1-eea126a934a7 0xc004ffc4a7 0xc004ffc4a8}] [] 
[{kube-controller-manager Update v1 2022-04-17 13:54:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"90d10608-7e54-43ca-88d1-eea126a934a7\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-17 13:54:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.61\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zjmgh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjmgh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],
FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:54:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:54:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-17 13:54:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.61,StartTime:2022-04-17 13:54:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-17 13:54:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0d68f6ef0c6cec07184ce7ec628b2d06715804d6cec2d67a2baef95b951c9420,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.61,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:09.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-6563" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":27,"skipped":578,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:09.540: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating replica set "test-rs" that asks for more than the allowed pod quota Apr 17 13:54:09.582: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 17 13:54:14.588: INFO: Pod name sample-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running �[1mSTEP�[0m: getting scale subresource �[1mSTEP�[0m: updating a scale subresource �[1mSTEP�[0m: verifying the replicaset Spec.Replicas was modified �[1mSTEP�[0m: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:14.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-4468" for this suite. 
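The ReplicaSet spec above exercises the scale subresource: it reads /scale, bumps spec.replicas, and confirms the change took effect. A minimal client-go sketch of that read-modify-write follows; the kubeconfig path, namespace, and object name are taken from this log, while the replica count and the clientset wiring are illustrative assumptions, not part of the test's own code.

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // scaleReplicaSet reads the /scale subresource, changes the replica count, and writes it back.
    func scaleReplicaSet(ctx context.Context, cs kubernetes.Interface, ns, name string, replicas int32) error {
    	// GET the scale subresource (an autoscaling/v1 Scale object) rather than the full ReplicaSet.
    	scale, err := cs.AppsV1().ReplicaSets(ns).GetScale(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas
    	// PUT the modified Scale back through the /scale subresource.
    	_, err = cs.AppsV1().ReplicaSets(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // kubeconfig path as used by this run
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	// Namespace and name match the transcript above; the target replica count is illustrative.
    	if err := scaleReplicaSet(context.Background(), cs, "replicaset-4468", "test-rs", 2); err != nil {
    		panic(err)
    	}
    	fmt.Println("scaled test-rs via the /scale subresource")
    }
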
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":28,"skipped":583,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:14.715: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a watch on configmaps �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: closing the watch once it receives two notifications Apr 17 13:54:14.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8486 19fdaf39-759c-42a8-9b4b-e52bffd09291 13416 0 2022-04-17 13:54:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-17 13:54:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:54:14.760: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8486 19fdaf39-759c-42a8-9b4b-e52bffd09291 13417 0 2022-04-17 13:54:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-17 13:54:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying the configmap a second time, while the watch is closed �[1mSTEP�[0m: creating a new watch on configmaps from the last resource version observed by the first watch �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 17 13:54:14.773: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8486 19fdaf39-759c-42a8-9b4b-e52bffd09291 13419 0 2022-04-17 13:54:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-17 13:54:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 
2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 17 13:54:14.773: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8486 19fdaf39-759c-42a8-9b4b-e52bffd09291 13420 0 2022-04-17 13:54:14 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-17 13:54:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:14.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-8486" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":29,"skipped":632,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:14.804: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename limitrange �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a LimitRange �[1mSTEP�[0m: Setting up watch �[1mSTEP�[0m: Submitting a LimitRange Apr 17 13:54:14.838: INFO: observed the limitRanges list �[1mSTEP�[0m: Verifying LimitRange creation was observed �[1mSTEP�[0m: Fetching the LimitRange to ensure it has proper values Apr 17 13:54:14.844: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Apr 17 13:54:14.844: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with no resource requirements �[1mSTEP�[0m: Ensuring Pod has resource requirements applied from LimitRange Apr 17 13:54:14.851: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Apr 17 13:54:14.851: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with partial resource requirements �[1mSTEP�[0m: Ensuring Pod has merged resource requirements applied from LimitRange Apr 17 13:54:14.862: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] Apr 17 13:54:14.862: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Failing to create a Pod with less than min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Updating a LimitRange �[1mSTEP�[0m: Verifying LimitRange updating is effective �[1mSTEP�[0m: Creating a Pod with less than former min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Deleting a LimitRange �[1mSTEP�[0m: Verifying the LimitRange was deleted Apr 17 13:54:21.893: INFO: limitRange is already deleted �[1mSTEP�[0m: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:21.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "limitrange-7512" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":30,"skipped":641,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:21.919: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename ingress �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/networking.k8s.io �[1mSTEP�[0m: getting /apis/networking.k8s.iov1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Apr 17 13:54:21.962: INFO: starting watch �[1mSTEP�[0m: cluster-wide listing �[1mSTEP�[0m: cluster-wide watching Apr 17 13:54:21.965: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 17 13:54:21.976: INFO: waiting for watch events with expected annotations Apr 17 13:54:21.976: INFO: saw patched and updated annotations �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: get /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:22.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "ingress-1257" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":31,"skipped":648,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:22.031: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: set up a multi version CRD Apr 17 13:54:22.054: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: mark a version not serverd �[1mSTEP�[0m: check the unserved version gets removed �[1mSTEP�[0m: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:34.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-3297" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":32,"skipped":662,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:34.862: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Create a Replicaset �[1mSTEP�[0m: Verify that the required pods have come up. 
Apr 17 13:54:34.899: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 17 13:54:39.905: INFO: Pod name sample-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running �[1mSTEP�[0m: Getting /status Apr 17 13:54:39.911: INFO: Replicaset test-rs has Conditions: [] �[1mSTEP�[0m: updating the Replicaset Status Apr 17 13:54:39.920: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} �[1mSTEP�[0m: watching for the ReplicaSet status to be updated Apr 17 13:54:39.922: INFO: Observed &ReplicaSet event: ADDED Apr 17 13:54:39.922: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.922: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.923: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.923: INFO: Found replicaset test-rs in namespace replicaset-4797 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 17 13:54:39.923: INFO: Replicaset test-rs has an updated status �[1mSTEP�[0m: patching the Replicaset Status Apr 17 13:54:39.923: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Apr 17 13:54:39.928: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} �[1mSTEP�[0m: watching for the Replicaset status to be patched Apr 17 13:54:39.931: INFO: Observed &ReplicaSet event: ADDED Apr 17 13:54:39.931: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.931: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.931: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.931: INFO: Observed replicaset test-rs in namespace replicaset-4797 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 17 13:54:39.931: INFO: Observed &ReplicaSet event: MODIFIED Apr 17 13:54:39.931: INFO: Found replicaset test-rs in namespace replicaset-4797 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } Apr 17 13:54:39.931: INFO: Replicaset test-rs has a patched status [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:39.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-4797" for this suite. 
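The status-endpoint spec above also patches the ReplicaSet's status subresource with the merge-patch payload shown in the log. A small sketch of the equivalent client-go call, assuming a kubernetes.Interface is already constructed as in the earlier sketch:

    package e2esketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    )

    // patchReplicaSetStatus applies the same merge patch the transcript logs above.
    // Because the call names the "status" subresource, only .status.conditions is
    // touched; the ReplicaSet spec is never modified.
    func patchReplicaSetStatus(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
    	_, err := cs.AppsV1().ReplicaSets(ns).Patch(ctx, name, types.MergePatchType, payload, metav1.PatchOptions{}, "status")
    	return err
    }
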
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":33,"skipped":664,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:39.979: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating service in namespace services-9024 �[1mSTEP�[0m: creating service affinity-clusterip in namespace services-9024 �[1mSTEP�[0m: creating replication controller affinity-clusterip in namespace services-9024 I0417 13:54:40.030957 15 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-9024, replica count: 3 I0417 13:54:43.081594 15 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 13:54:43.087: INFO: Creating new exec pod Apr 17 13:54:46.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9024 exec execpod-affinitysk8q5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 17 13:54:46.239: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Apr 17 13:54:46.240: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:54:46.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9024 exec execpod-affinitysk8q5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.249.117 80' Apr 17 13:54:46.395: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.249.117 80\nConnection to 10.131.249.117 80 port [tcp/http] succeeded!\n" Apr 17 13:54:46.395: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 17 13:54:46.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9024 exec execpod-affinitysk8q5 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.131.249.117:80/ ; done' Apr 17 13:54:46.668: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.249.117:80/\n" Apr 17 13:54:46.668: INFO: stdout: "\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7\naffinity-clusterip-n94v7" Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Received response from host: affinity-clusterip-n94v7 Apr 17 13:54:46.668: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip in namespace services-9024, will wait for the garbage collector to delete the pods Apr 17 13:54:46.738: INFO: Deleting ReplicationController affinity-clusterip took: 5.55569ms Apr 17 13:54:46.838: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.871227ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:48.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-9024" for this suite. 
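The session-affinity spec above repeats a request 16 times against the service's ClusterIP and expects every reply to name the same backend pod (affinity-clusterip-n94v7 here). The sketch below checks the same property over plain HTTP rather than by exec'ing curl from a helper pod as the test does; it assumes it runs somewhere the ClusterIP 10.131.249.117 is routable, e.g. inside the cluster.

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    	"time"
    )

    // checkAffinity issues n requests to the service URL and reports whether every
    // response body names the same backend pod, which is what ClientIP session
    // affinity should guarantee for a single client.
    func checkAffinity(url string, n int) (bool, error) {
    	client := &http.Client{Timeout: 2 * time.Second}
    	var first string
    	for i := 0; i < n; i++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			return false, err
    		}
    		body, err := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if err != nil {
    			return false, err
    		}
    		host := strings.TrimSpace(string(body))
    		if first == "" {
    			first = host
    		} else if host != first {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := checkAffinity("http://10.131.249.117:80/", 16)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("all responses from one backend:", ok)
    }
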
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":34,"skipped":688,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:48.992: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:54:49.019: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 17 13:54:51.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2338 --namespace=crd-publish-openapi-2338 create -f -' Apr 17 13:54:51.905: INFO: stderr: "" Apr 17 13:54:51.906: INFO: stdout: "e2e-test-crd-publish-openapi-4027-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 17 13:54:51.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2338 --namespace=crd-publish-openapi-2338 delete e2e-test-crd-publish-openapi-4027-crds test-cr' Apr 17 13:54:51.976: INFO: stderr: "" Apr 17 13:54:51.976: INFO: stdout: "e2e-test-crd-publish-openapi-4027-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 17 13:54:51.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2338 --namespace=crd-publish-openapi-2338 apply -f -' Apr 17 13:54:52.148: INFO: stderr: "" Apr 17 13:54:52.148: INFO: stdout: "e2e-test-crd-publish-openapi-4027-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 17 13:54:52.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2338 --namespace=crd-publish-openapi-2338 delete e2e-test-crd-publish-openapi-4027-crds test-cr' Apr 17 13:54:52.219: INFO: stderr: "" Apr 17 13:54:52.219: INFO: stdout: "e2e-test-crd-publish-openapi-4027-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Apr 17 13:54:52.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2338 explain e2e-test-crd-publish-openapi-4027-crds' Apr 17 13:54:52.393: INFO: stderr: "" Apr 17 13:54:52.393: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4027-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:54:54.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-2338" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":35,"skipped":708,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:53.540: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating pod test-webserver-5611217f-29a4-4a82-93dd-389b3ea38ace in namespace container-probe-4059 Apr 17 13:51:55.586: INFO: Started pod test-webserver-5611217f-29a4-4a82-93dd-389b3ea38ace in namespace container-probe-4059 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Apr 17 13:51:55.589: INFO: Initial restart count of pod test-webserver-5611217f-29a4-4a82-93dd-389b3ea38ace is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:55:56.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-4059" for this suite. 
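The probe spec above creates a webserver pod with an HTTP liveness probe on /healthz and asserts that its restartCount stays at 0 for several minutes. Below is a sketch of a pod spec with such a probe; the image, port, and probe timings are assumptions (the log does not record them), and the field names follow k8s.io/api v1.23+, where Probe embeds ProbeHandler.

    package e2esketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // healthyWebserverPod builds a pod whose container carries an HTTP liveness probe.
    // As long as /healthz keeps returning 200, the kubelet never restarts the container,
    // which is what the test checks by watching restartCount remain 0.
    func healthyWebserverPod(name string) *corev1.Pod {
    	return &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: name},
    		Spec: corev1.PodSpec{
    			Containers: []corev1.Container{{
    				Name:  "test-webserver",
    				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33", // placeholder image, not taken from this run
    				Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
    				LivenessProbe: &corev1.Probe{
    					ProbeHandler: corev1.ProbeHandler{
    						HTTPGet: &corev1.HTTPGetAction{
    							Path: "/healthz",
    							Port: intstr.FromInt(8080),
    						},
    					},
    					InitialDelaySeconds: 5,
    					PeriodSeconds:       10,
    					FailureThreshold:    3,
    				},
    			}},
    			RestartPolicy: corev1.RestartPolicyAlways,
    		},
    	}
    }
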
�[32m• [SLOW TEST:242.604 seconds]�[0m [sig-node] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":734,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:55:56.185: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:56:03.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-1089" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":-1,"completed":36,"skipped":762,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:56:03.324: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename ingressclass �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186 [It] should support creating IngressClass API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/networking.k8s.io �[1mSTEP�[0m: getting /apis/networking.k8s.iov1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Apr 17 13:56:03.396: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 17 13:56:03.405: INFO: waiting for watch events with expected annotations Apr 17 13:56:03.406: INFO: saw patched and updated annotations �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:56:03.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "ingressclass-3950" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":37,"skipped":827,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:56:03.464: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: delete a job �[1mSTEP�[0m: deleting Job.batch foo in namespace job-6520, will wait for the garbage collector to delete the pods Apr 17 13:56:05.564: INFO: Deleting Job.batch foo took: 4.949103ms Apr 17 13:56:05.664: INFO: Terminating Job.batch foo pods took: 100.646659ms �[1mSTEP�[0m: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:56:37.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-6520" for this suite. 
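The Job spec above deletes the Job and then waits for the garbage collector to remove its pods. A sketch of the equivalent client-go delete is below; the background propagation policy is an assumption (the e2e framework may use a different one), and a real caller would still poll until the pods disappear.

    package e2esketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteJobAndLetGCReap deletes a Job and leaves its pods to the garbage collector,
    // mirroring the "will wait for the garbage collector to delete the pods" step above.
    func deleteJobAndLetGCReap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	policy := metav1.DeletePropagationBackground
    	return cs.BatchV1().Jobs(ns).Delete(ctx, name, metav1.DeleteOptions{PropagationPolicy: &policy})
    }
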
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":38,"skipped":833,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:51:59.010: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-90.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-90.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-90.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-90.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: creating a pod to probe /etc/hosts �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Apr 17 13:55:39.518: INFO: Unable to read wheezy_hosts@dns-querier-1.dns-test-service.dns-90.svc.cluster.local from pod dns-90/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd: the server is currently unable to handle the request (get pods dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd) Apr 17 13:57:05.061: FAIL: Unable to read wheezy_hosts@dns-querier-1 from pod dns-90/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-90/pods/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fcab8026ed0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000056080}, 0xc00257fb18) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000056080}, 0x58, 0x2bb9f85, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000056080}, 0x4a, 0xc00257fba8, 0x2378d47) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00005c880, 0xc00257fbf0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0040e9f80, 0x4, 0x4}, {0x6ecd939, 0x7}, 0xc00343ac00, {0x78eb710, 0xc003c4db00}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e9b4a0, 0xc00343ac00, {0xc0040e9f80, 0x4, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x4f2 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2371919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000232d00, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E0417 13:57:05.062544 19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Apr 17 13:57:05.062: Unable to read wheezy_hosts@dns-querier-1 from pod dns-90/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-90/pods/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd/proxy/results/wheezy_hosts@dns-querier-1\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fcab8026ed0, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000056080}, 0xc00257fb18)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000056080}, 0x58, 0x2bb9f85, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 
+0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000056080}, 0x4a, 0xc00257fba8, 0x2378d47)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00005c880, 0xc00257fbf0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0040e9f80, 0x4, 0x4}, {0x6ecd939, 0x7}, 0xc00343ac00, {0x78eb710, 0xc003c4db00}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e9b4a0, 0xc00343ac00, {0xc0040e9f80, 0x4, 0x4})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.4()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x4f2\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x2371919)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc000232d00, 0x71566f0)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
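The panic text above is Ginkgo's standard guidance: an assertion made from a goroutine must be guarded with defer GinkgoRecover() so the failure is routed back to Ginkgo instead of crashing the process. A minimal sketch of the pattern follows, using the Ginkgo v1/Gomega dot imports this e2e suite vendors; doSomething and the spec wording are placeholders, and running it still needs the suite's usual RunSpecs bootstrap.

    package e2esketch_test

    import (
    	. "github.com/onsi/ginkgo"
    	. "github.com/onsi/gomega"
    )

    func doSomething() error { return nil } // placeholder for the real work under test

    var _ = Describe("assertions made from goroutines", func() {
    	It("recovers failures raised off the main spec goroutine", func() {
    		done := make(chan struct{})
    		go func() {
    			// Without this deferred call, a failed Expect in this goroutine panics the
    			// whole process (the situation described in the log above); with it, the
    			// failure is handed back to Ginkgo and reported against this spec.
    			defer GinkgoRecover()
    			defer close(done)
    			Expect(doSomething()).To(Succeed())
    		}()
    		<-done
    	})
    })
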
) goroutine 134 [running]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6a38820, 0xc00520c0c0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00007c290}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75 panic({0x6a38820, 0xc00520c0c0}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73 panic({0x610baa0, 0x76987f0}) /usr/local/go/src/runtime/panic.go:1038 +0x215 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc002ce0280, 0x12b}, {0xc00257f5b0, 0x0, 0x40}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002ce0280, 0x12b}, {0xc00257f690, 0x6ec4cca, 0xc00257f6b8}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7 k8s.io/kubernetes/test/e2e/framework.Failf({0x6f7531e, 0x2d}, {0xc00257f900, 0x0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131 k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7fcab8026ed0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000056080}, 0xc00257fb18) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000056080}, 0x58, 0x2bb9f85, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000056080}, 0x4a, 0xc00257fba8, 0x2378d47) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00005c880, 0xc00257fbf0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0040e9f80, 0x4, 0x4}, {0x6ecd939, 0x7}, 0xc00343ac00, {0x78eb710, 0xc003c4db00}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000e9b4a0, 0xc00343ac00, {0xc0040e9f80, 0x4, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.4() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:127 +0x4f2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000502b60) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0025815c8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0020aae10, 0xc002581990, {0x76a2200, 0xc00005c880}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0020aae10, {0x76a2200, 0xc00005c880}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003098000, 0xc0020aae10) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003098000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003098000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019a070, {0x7fcab8398d30, 0xc000232d00}, {0x6f04445, 0x40}, {0xc000d2bf20, 0x3, 0x3}, {0x7811bb8, 0xc00005c880}, ...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x76a8840, 0xc000232d00}, {0x6f04445, 0x14}, {0xc000dacf40, 0x3, 0x6}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x76a8840, 0xc000232d00}, {0x6f04445, 0x14}, {0xc00063f780, 0x2, 0x2}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2371919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000232d00, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:05.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-90" for this suite. �[91m�[1m• Failure [306.079 seconds]�[0m [sig-network] DNS �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23�[0m �[91m�[1mshould provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m �[91mApr 17 13:57:05.062: Unable to read wheezy_hosts@dns-querier-1 from pod dns-90/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-90/pods/dns-test-fccfe474-e430-4ed6-ae6d-feb028c275fd/proxy/results/wheezy_hosts@dns-querier-1": context deadline exceeded�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 �[90m------------------------------�[0m [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:56:37.931: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename endpointslice �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: referencing a single matching pod �[1mSTEP�[0m: referencing matching pods with named port �[1mSTEP�[0m: creating empty Endpoints and EndpointSlices for no matching Pods �[1mSTEP�[0m: recreating EndpointSlices after they've been deleted Apr 17 13:56:58.065: INFO: EndpointSlice for Service endpointslice-7338/example-named-port not found [AfterEach] [sig-network] EndpointSlice 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:08.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "endpointslice-7338" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":39,"skipped":870,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:08.139: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-1872d061-8a27-4f68-b311-7e32314cb5f3 �[1mSTEP�[0m: Creating a pod to test consume configMaps Apr 17 13:57:08.176: INFO: Waiting up to 5m0s for pod "pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b" in namespace "configmap-4955" to be "Succeeded or Failed" Apr 17 13:57:08.179: INFO: Pod "pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21398ms Apr 17 13:57:10.183: INFO: Pod "pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.006052827s �[1mSTEP�[0m: Saw pod success Apr 17 13:57:10.183: INFO: Pod "pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b" satisfied condition "Succeeded or Failed" Apr 17 13:57:10.185: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:57:10.208: INFO: Waiting for pod pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b to disappear Apr 17 13:57:10.211: INFO: Pod pod-configmaps-973e5a67-3066-43f1-ac6b-39595b19fe0b no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:10.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-4955" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":920,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:54:54.565: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating the pod with failed condition �[1mSTEP�[0m: updating the pod Apr 17 13:56:55.127: INFO: Successfully updated pod "var-expansion-715741d3-46d9-4031-98e3-d6901384dcc8" �[1mSTEP�[0m: waiting for pod running �[1mSTEP�[0m: deleting the pod gracefully Apr 17 13:56:57.133: INFO: Deleting pod "var-expansion-715741d3-46d9-4031-98e3-d6901384dcc8" in namespace "var-expansion-8126" Apr 17 13:56:57.139: INFO: Wait up to 5m0s for pod "var-expansion-715741d3-46d9-4031-98e3-d6901384dcc8" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:29.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-8126" for this suite. 
�[32m• [SLOW TEST:154.594 seconds]�[0m [sig-node] Variable Expansion �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":36,"skipped":717,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:29.164: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:57:29.553: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:57:32.573: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Registering the crd webhook via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource definition that should be denied by the webhook Apr 17 13:57:32.590: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:32.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-9006" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9006-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":37,"skipped":720,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:32.785: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting the auto-created API token Apr 17 13:57:33.325: INFO: created pod pod-service-account-defaultsa Apr 17 13:57:33.326: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 17 13:57:33.330: INFO: created pod pod-service-account-mountsa Apr 17 13:57:33.330: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 17 13:57:33.336: INFO: created pod pod-service-account-nomountsa Apr 17 13:57:33.336: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 17 13:57:33.340: INFO: created pod pod-service-account-defaultsa-mountspec Apr 17 13:57:33.340: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 17 13:57:33.344: INFO: created pod pod-service-account-mountsa-mountspec Apr 17 13:57:33.344: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 17 13:57:33.351: INFO: created pod pod-service-account-nomountsa-mountspec Apr 17 13:57:33.351: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 17 13:57:33.360: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 17 13:57:33.360: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 17 13:57:33.367: INFO: created pod pod-service-account-mountsa-nomountspec Apr 17 13:57:33.367: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 17 13:57:33.395: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 17 
13:57:33.395: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:33.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-7090" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":38,"skipped":789,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:33.445: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:57:34.013: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:57:37.034: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 17 13:57:37.037: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-1272-crds.webhook.example.com via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:40.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-4042" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-4042-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":39,"skipped":803,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:40.231: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409 �[1mSTEP�[0m: creating an pod Apr 17 13:57:40.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 17 13:57:40.389: INFO: stderr: "" Apr 17 13:57:40.389: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Waiting for log generator to start. Apr 17 13:57:40.390: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 17 13:57:40.390: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8071" to be "running and ready, or succeeded" Apr 17 13:57:40.393: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.912528ms Apr 17 13:57:42.397: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.007149445s Apr 17 13:57:42.397: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 17 13:57:42.397: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] �[1mSTEP�[0m: checking for a matching strings Apr 17 13:57:42.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator' Apr 17 13:57:42.492: INFO: stderr: "" Apr 17 13:57:42.492: INFO: stdout: "I0417 13:57:41.010952 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/kc8p 320\nI0417 13:57:41.211005 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/vwxp 201\nI0417 13:57:41.411936 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/5lmw 334\nI0417 13:57:41.611321 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/zrp 283\nI0417 13:57:41.811724 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/r2d 562\nI0417 13:57:42.011001 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/5frp 220\nI0417 13:57:42.211377 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/qddb 267\nI0417 13:57:42.411766 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tn6s 315\n" �[1mSTEP�[0m: limiting log lines Apr 17 13:57:42.492: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator --tail=1' Apr 17 13:57:42.579: INFO: stderr: "" Apr 17 13:57:42.579: INFO: stdout: "I0417 13:57:42.411766 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tn6s 315\n" Apr 17 13:57:42.579: INFO: got output "I0417 13:57:42.411766 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tn6s 315\n" �[1mSTEP�[0m: limiting log bytes Apr 17 13:57:42.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator --limit-bytes=1' Apr 17 13:57:42.659: INFO: stderr: "" Apr 17 13:57:42.659: INFO: stdout: "I" Apr 17 13:57:42.659: INFO: got output "I" �[1mSTEP�[0m: exposing timestamps Apr 17 13:57:42.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator --tail=1 --timestamps' Apr 17 13:57:42.750: INFO: stderr: "" Apr 17 13:57:42.750: INFO: stdout: "2022-04-17T13:57:42.611189311Z I0417 13:57:42.611047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/njr 551\n" Apr 17 13:57:42.750: INFO: got output "2022-04-17T13:57:42.611189311Z I0417 13:57:42.611047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/njr 551\n" �[1mSTEP�[0m: restricting to a time range Apr 17 13:57:45.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator --since=1s' Apr 17 13:57:45.349: INFO: stderr: "" Apr 17 13:57:45.349: INFO: stdout: "I0417 13:57:44.411106 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/cch 408\nI0417 13:57:44.611798 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/l8bd 358\nI0417 13:57:44.811029 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/pzct 366\nI0417 13:57:45.011392 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/crz 438\nI0417 13:57:45.211752 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/cxp 330\n" Apr 17 13:57:45.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 logs logs-generator logs-generator --since=24h' Apr 17 13:57:45.437: INFO: stderr: "" Apr 17 13:57:45.437: INFO: stdout: "I0417 13:57:41.010952 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/kc8p 320\nI0417 13:57:41.211005 1 
logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/vwxp 201\nI0417 13:57:41.411936 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/5lmw 334\nI0417 13:57:41.611321 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/zrp 283\nI0417 13:57:41.811724 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/r2d 562\nI0417 13:57:42.011001 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/5frp 220\nI0417 13:57:42.211377 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/qddb 267\nI0417 13:57:42.411766 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/tn6s 315\nI0417 13:57:42.611047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/kube-system/pods/njr 551\nI0417 13:57:42.811428 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/cpw 465\nI0417 13:57:43.011732 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/hvlb 369\nI0417 13:57:43.211010 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/pq8d 574\nI0417 13:57:43.411350 1 logs_generator.go:76] 12 GET /api/v1/namespaces/default/pods/d8v 219\nI0417 13:57:43.611809 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/nzs 408\nI0417 13:57:43.811043 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/wvkx 375\nI0417 13:57:44.011433 1 logs_generator.go:76] 15 GET /api/v1/namespaces/default/pods/hphg 301\nI0417 13:57:44.211798 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/x2t6 325\nI0417 13:57:44.411106 1 logs_generator.go:76] 17 GET /api/v1/namespaces/kube-system/pods/cch 408\nI0417 13:57:44.611798 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/l8bd 358\nI0417 13:57:44.811029 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/pzct 366\nI0417 13:57:45.011392 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/crz 438\nI0417 13:57:45.211752 1 logs_generator.go:76] 21 POST /api/v1/namespaces/ns/pods/cxp 330\nI0417 13:57:45.411052 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/8pn 449\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 Apr 17 13:57:45.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8071 delete pod logs-generator' Apr 17 13:57:46.331: INFO: stderr: "" Apr 17 13:57:46.331: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:46.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8071" for this suite. 
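Note: the Kubectl logs spec above exercises the standard log-filtering flags; a minimal sketch of reproducing the same queries by hand (pod name and namespace taken from the spec, purely illustrative):

kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-8071 logs logs-generator --tail=1                # last line only
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-8071 logs logs-generator --limit-bytes=1         # first byte only
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-8071 logs logs-generator --tail=1 --timestamps   # prepend RFC3339 timestamps
kubectl --kubeconfig=/tmp/kubeconfig -n kubectl-8071 logs logs-generator --since=1s              # only entries from the last second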
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":40,"skipped":824,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:46.371: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating projection with configMap that has name projected-configmap-test-upd-952d9733-f6f0-4351-9ae0-cba70bc43aba �[1mSTEP�[0m: Creating the pod Apr 17 13:57:46.416: INFO: The status of Pod pod-projected-configmaps-1b468891-2be5-47db-82c7-20fa031255d8 is Pending, waiting for it to be Running (with Ready = true) Apr 17 13:57:48.422: INFO: The status of Pod pod-projected-configmaps-1b468891-2be5-47db-82c7-20fa031255d8 is Running (Ready = true) �[1mSTEP�[0m: Updating configmap projected-configmap-test-upd-952d9733-f6f0-4351-9ae0-cba70bc43aba �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:57:50.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-5474" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":848,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:50.532: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating pod liveness-baccac34-ec16-4a19-89f3-4d881f688aac in namespace container-probe-3160 Apr 17 13:57:54.576: INFO: Started pod liveness-baccac34-ec16-4a19-89f3-4d881f688aac in namespace container-probe-3160 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Apr 17 13:57:54.580: INFO: Initial restart count of pod liveness-baccac34-ec16-4a19-89f3-4d881f688aac is 0 Apr 17 13:58:12.623: INFO: Restart count of pod container-probe-3160/liveness-baccac34-ec16-4a19-89f3-4d881f688aac is now 1 (18.04332554s elapsed) �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:12.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-3160" for this suite. 
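Note: a minimal sketch of the kind of pod the liveness spec above creates, an HTTP server whose /healthz endpoint eventually fails so the kubelet restarts the container via an httpGet probe; the image, args, port, and probe timings are assumptions for illustration, not the exact values used by the e2e framework:

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/e2e-test-images/agnhost:2.33
    args: ["liveness"]                 # assumed: agnhost's liveness server starts failing /healthz after a short period
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      failureThreshold: 1
EOF
kubectl --kubeconfig=/tmp/kubeconfig get pod liveness-http-demo -w   # RESTARTS should increase once /healthz starts failing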
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":891,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:57:10.249: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-9592 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a new StatefulSet Apr 17 13:57:10.294: INFO: Found 0 stateful pods, waiting for 3 Apr 17 13:57:20.302: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 17 13:57:20.302: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 17 13:57:20.302: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 17 13:57:20.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9592 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 13:57:20.479: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 17 13:57:20.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 13:57:20.479: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' �[1mSTEP�[0m: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Apr 17 13:57:30.517: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Creating a new revision �[1mSTEP�[0m: Updating Pods in reverse ordinal order Apr 17 13:57:40.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9592 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 13:57:40.712: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 17 13:57:40.712: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 13:57:40.712: INFO: stdout of mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' �[1mSTEP�[0m: Rolling back to a previous revision Apr 17 13:58:00.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9592 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 17 13:58:00.886: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 17 13:58:00.886: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 17 13:58:00.886: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 17 13:58:10.919: INFO: Updating stateful set ss2 �[1mSTEP�[0m: Rolling back update in reverse ordinal order Apr 17 13:58:20.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9592 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 17 13:58:21.087: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 17 13:58:21.087: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 17 13:58:21.087: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 17 13:58:31.109: INFO: Deleting all statefulset in ns statefulset-9592 Apr 17 13:58:31.111: INFO: Scaling statefulset ss2 to 0 Apr 17 13:58:41.127: INFO: Waiting for statefulset status.replicas updated to 0 Apr 17 13:58:41.130: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:41.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-9592" for this suite. 
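Note: the StatefulSet spec above drives the image update and rollback through the e2e framework; a minimal sketch of doing the same by hand with kubectl (StatefulSet name and namespace taken from the spec; the container name "webserver" is an assumption):

kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9592 set image statefulset/ss2 webserver=k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9592 rollout status statefulset/ss2
kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9592 rollout history statefulset/ss2
kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-9592 rollout undo statefulset/ss2   # roll back to the previous revision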
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":41,"skipped":942,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:58:41.166: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test env composition Apr 17 13:58:41.214: INFO: Waiting up to 5m0s for pod "var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe" in namespace "var-expansion-9078" to be "Succeeded or Failed" Apr 17 13:58:41.218: INFO: Pod "var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.172568ms Apr 17 13:58:43.223: INFO: Pod "var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009050659s �[1mSTEP�[0m: Saw pod success Apr 17 13:58:43.223: INFO: Pod "var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe" satisfied condition "Succeeded or Failed" Apr 17 13:58:43.226: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Apr 17 13:58:43.248: INFO: Waiting for pod var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe to disappear Apr 17 13:58:43.250: INFO: Pod var-expansion-2c1b6bb0-c659-4ae3-81a6-a5af7cd41efe no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:43.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-9078" for this suite. 
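Note: the Variable Expansion spec above verifies that one env var can be composed from another via $(VAR) references; a minimal sketch of the same behavior in a hand-written pod (names and image are illustrative, not the e2e framework's own):

kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-composition-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: "foo"
    - name: FOOBAR
      value: "$(FOO)-bar"   # expanded by Kubernetes because FOO is defined earlier in the list
EOF
kubectl --kubeconfig=/tmp/kubeconfig logs env-composition-demo   # expected output: foo-bar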
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":945,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:58:43.271: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 17 13:58:44.177: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 17 13:58:47.198: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a validating webhook configuration �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules �[1mSTEP�[0m: Updating a validating webhook configuration's rules to not include the create operation �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules �[1mSTEP�[0m: Patching a validating webhook configuration's rules to include the create operation �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:47.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-4619" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-4619-markers" for this suite. 
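Note: the webhook spec above updates and patches a ValidatingWebhookConfiguration's rules in place; a minimal sketch of an equivalent manual change that first drops and then restores the CREATE operation (the configuration name is a placeholder, since the e2e framework generates its own):

kubectl patch validatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
kubectl patch validatingwebhookconfiguration <config-name> --type=json \
  -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'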
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":43,"skipped":953,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 17 13:58:12.679: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5892 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5892;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5892 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5892;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5892.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5892.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5892.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5892.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5892.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5892.svc;check="$$(dig +notcp +noall +answer +search 70.22.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.22.70_udp@PTR;check="$$(dig +tcp +noall +answer +search 70.22.140.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.140.22.70_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5892 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5892;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5892 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5892;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5892.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5892.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5892.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5892.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5892.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5892.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5892.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5892.svc;check="$$(dig +notcp +noall +answer +search 70.22.140.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.140.22.70_udp@PTR;check="$$(dig +tcp +noall +answer +search 70.22.140.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.140.22.70_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Apr 17 13:58:14.762: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.766: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.770: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.773: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.776: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.779: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.781: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.784: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.799: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.802: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.805: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.809: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.814: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 
13:58:14.817: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.820: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.824: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:14.837: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:19.844: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.847: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.850: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.855: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.858: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.860: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.864: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 
13:58:19.878: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.881: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.884: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.887: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.889: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.891: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.894: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.896: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:19.907: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:24.844: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.857: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.867: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.872: INFO: Unable to read 
wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.886: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.899: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.902: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.905: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.907: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.910: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.912: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.915: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.917: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:24.930: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc 
wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:29.843: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.846: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.851: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.853: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.856: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.858: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.861: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.875: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.878: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.880: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.882: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.885: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from 
pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.888: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.890: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.892: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:29.902: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:34.842: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.846: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.850: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.853: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.856: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.859: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.861: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.864: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod 
dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.879: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.882: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.885: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.887: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.890: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.892: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.895: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.897: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:34.930: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:39.842: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.846: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.849: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the 
server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.852: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.855: INFO: Unable to read wheezy_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.858: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.861: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.865: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.881: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.884: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.887: INFO: Unable to read jessie_udp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.890: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892 from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.892: INFO: Unable to read jessie_udp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.894: INFO: Unable to read jessie_tcp@dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.897: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.900: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:39.912: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5892 wheezy_tcp@dns-test-service.dns-5892 wheezy_udp@dns-test-service.dns-5892.svc wheezy_tcp@dns-test-service.dns-5892.svc wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5892.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5892 jessie_tcp@dns-test-service.dns-5892 jessie_udp@dns-test-service.dns-5892.svc jessie_tcp@dns-test-service.dns-5892.svc jessie_udp@_http._tcp.dns-test-service.dns-5892.svc jessie_tcp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:44.862: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc from pod dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2: the server could not find the requested resource (get pods dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2) Apr 17 13:58:44.931: INFO: Lookups using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 failed for: [wheezy_udp@_http._tcp.dns-test-service.dns-5892.svc] Apr 17 13:58:49.901: INFO: DNS probes using dns-5892/dns-test-15f476ab-97f8-4c1c-af04-0493b029efc2 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:49.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5892" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":923,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:58:47.329: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 17 13:58:47.658: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 17 13:58:50.682: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:51.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9579" for this suite. STEP: Destroying namespace "webhook-9579-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":44,"skipped":957,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:58:49.981: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 17 13:58:50.017: INFO: Waiting up to 5m0s for pod "pod-67920e64-6cab-4afc-92a4-9307178d7bbd" in namespace "emptydir-3255" to be "Succeeded or Failed" Apr 17 13:58:50.024: INFO: Pod "pod-67920e64-6cab-4afc-92a4-9307178d7bbd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.735227ms Apr 17 13:58:52.029: INFO: Pod "pod-67920e64-6cab-4afc-92a4-9307178d7bbd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.010879695s STEP: Saw pod success Apr 17 13:58:52.029: INFO: Pod "pod-67920e64-6cab-4afc-92a4-9307178d7bbd" satisfied condition "Succeeded or Failed" Apr 17 13:58:52.031: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-67920e64-6cab-4afc-92a4-9307178d7bbd container test-container: <nil> STEP: delete the pod Apr 17 13:58:52.043: INFO: Waiting for pod pod-67920e64-6cab-4afc-92a4-9307178d7bbd to disappear Apr 17 13:58:52.046: INFO: Pod pod-67920e64-6cab-4afc-92a4-9307178d7bbd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:52.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3255" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":942,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:58:51.868: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test override all Apr 17 13:58:51.903: INFO: Waiting up to 5m0s for pod "client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0" in namespace "containers-6881" to be "Succeeded or Failed" Apr 17 13:58:51.906: INFO: Pod "client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.532332ms Apr 17 13:58:53.910: INFO: Pod "client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.006693045s STEP: Saw pod success Apr 17 13:58:53.910: INFO: Pod "client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0" satisfied condition "Succeeded or Failed" Apr 17 13:58:53.912: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0 container agnhost-container: <nil> STEP: delete the pod Apr 17 13:58:53.924: INFO: Waiting for pod client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0 to disappear Apr 17 13:58:53.927: INFO: Pod client-containers-8c04cbfc-b0e5-49ec-ba33-c3a1bd3922b0 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:58:53.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6881" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":991,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:58:53.940: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create set of pods Apr 17 13:58:53.974: INFO: created test-pod-1 Apr 17 13:58:55.981: INFO: running and ready test-pod-1 Apr 17 13:58:55.985: INFO: created test-pod-2 Apr 17 13:58:57.993: INFO: running and ready test-pod-2 Apr 17 13:58:57.997: INFO: created test-pod-3 Apr 17 13:59:00.003: INFO: running and ready test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted Apr 17 13:59:00.029: INFO: Pod quantity 3 is different from expected quantity 0 Apr 17 13:59:01.033: INFO: Pod quantity 2 is different from expected quantity 0 Apr 17 13:59:02.032: INFO: Pod quantity 1 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:03.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7414" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":46,"skipped":995,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:03.122: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context STEP: Waiting for a default service account to be provisioned in namespace [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser Apr 17 13:59:03.160: INFO: Waiting up to 5m0s for pod "security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4" in namespace "security-context-2396" to be "Succeeded or Failed" Apr 17 13:59:03.163: INFO: Pod "security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451496ms Apr 17 13:59:05.166: INFO: Pod "security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006227042s STEP: Saw pod success Apr 17 13:59:05.166: INFO: Pod "security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4" satisfied condition "Succeeded or Failed" Apr 17 13:59:05.169: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4 container test-container: <nil> STEP: delete the pod Apr 17 13:59:05.193: INFO: Waiting for pod security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4 to disappear Apr 17 13:59:05.196: INFO: Pod security-context-bd44aa02-f89c-4789-ad18-6bdc22d3cde4 no longer exists [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:05.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-2396" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":47,"skipped":1047,"failed":4,"failures":["[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:58:52.119: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating pod pod-subpath-test-projected-kz48 STEP: Creating a pod to test atomic-volume-subpath Apr 17 13:58:52.159: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-kz48" in namespace "subpath-7413" to be "Succeeded or Failed" Apr 17 13:58:52.161: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.277412ms Apr 17 13:58:54.165: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 2.006133005s Apr 17 13:58:56.175: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 4.015426075s Apr 17 13:58:58.179: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 6.019462417s Apr 17 13:59:00.184: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 8.024464643s Apr 17 13:59:02.187: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 10.02783787s Apr 17 13:59:04.191: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 12.03214949s Apr 17 13:59:06.196: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 14.036454862s Apr 17 13:59:08.200: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 16.041103888s Apr 17 13:59:10.205: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 18.045412286s Apr 17 13:59:12.210: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Running", Reason="", readiness=true. Elapsed: 20.050876151s Apr 17 13:59:14.215: INFO: Pod "pod-subpath-test-projected-kz48": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 22.055869661s STEP: Saw pod success Apr 17 13:59:14.215: INFO: Pod "pod-subpath-test-projected-kz48" satisfied condition "Succeeded or Failed" Apr 17 13:59:14.218: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-subpath-test-projected-kz48 container test-container-subpath-projected-kz48: <nil> STEP: delete the pod Apr 17 13:59:14.233: INFO: Waiting for pod pod-subpath-test-projected-kz48 to disappear Apr 17 13:59:14.236: INFO: Pod pod-subpath-test-projected-kz48 no longer exists STEP: Deleting pod pod-subpath-test-projected-kz48 Apr 17 13:59:14.236: INFO: Deleting pod "pod-subpath-test-projected-kz48" in namespace "subpath-7413" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:14.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7413" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":45,"skipped":994,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:14.286: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name projected-configmap-test-volume-b568e89b-3203-4bcf-81ec-e723edf8174b STEP: Creating a pod to test consume configMaps Apr 17 13:59:14.336: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1" in namespace "projected-3947" to be "Succeeded or Failed" Apr 17 13:59:14.339: INFO: Pod "pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.529465ms Apr 17 13:59:16.344: INFO: Pod "pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.007591913s STEP: Saw pod success Apr 17 13:59:16.344: INFO: Pod "pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1" satisfied condition "Succeeded or Failed" Apr 17 13:59:16.347: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-w8x9n pod pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1 container agnhost-container: <nil> STEP: delete the pod Apr 17 13:59:16.360: INFO: Waiting for pod pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1 to disappear Apr 17 13:59:16.363: INFO: Pod pod-projected-configmaps-f3d4a8c2-833e-4833-82cd-ad5e52d2cfd1 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:16.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3947" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":1017,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:16.471: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Pods Set QOS Class /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:149 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [sig-node] Pods Extended /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:16.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8408" for this suite.
• ------------------------------ {"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":47,"skipped":1082,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:16.550: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:16.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6501" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":48,"skipped":1089,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:16.629: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl can dry-run update Pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Apr 17 13:59:16.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5468 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Apr 17 13:59:16.736: INFO: stderr: "" Apr 17 13:59:16.736: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: replace the image in the pod with server-side dry-run Apr 17 13:59:16.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5468 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server' Apr 17 13:59:17.667: INFO: stderr: "" Apr 17 13:59:17.667: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Apr 17 13:59:17.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5468 delete pods e2e-test-httpd-pod' Apr 17 13:59:19.238: INFO: stderr: "" Apr 17 13:59:19.239: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:19.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5468" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":49,"skipped":1091,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:19.289: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap configmap-7813/configmap-test-de6e5647-3212-4685-b92a-dca4cf3bffcf STEP: Creating a pod to test consume configMaps Apr 17 13:59:19.324: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f" in namespace "configmap-7813" to be "Succeeded or Failed" Apr 17 13:59:19.327: INFO: Pod "pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092311ms Apr 17 13:59:21.331: INFO: Pod "pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.007318891s STEP: Saw pod success Apr 17 13:59:21.331: INFO: Pod "pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f" satisfied condition "Succeeded or Failed" Apr 17 13:59:21.334: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f container env-test: <nil> STEP: delete the pod Apr 17 13:59:21.348: INFO: Waiting for pod pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f to disappear Apr 17 13:59:21.351: INFO: Pod pod-configmaps-f0653f49-2e0d-4940-98c0-d4267229462f no longer exists [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:21.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7813" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1122,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:21.388: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 17 13:59:21.421: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf" in namespace "projected-34" to be "Succeeded or Failed" Apr 17 13:59:21.425: INFO: Pod "downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.425115ms Apr 17 13:59:23.429: INFO: Pod "downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.007095643s STEP: Saw pod success Apr 17 13:59:23.429: INFO: Pod "downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf" satisfied condition "Succeeded or Failed" Apr 17 13:59:23.431: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf container client-container: <nil> STEP: delete the pod Apr 17 13:59:23.445: INFO: Waiting for pod downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf to disappear Apr 17 13:59:23.447: INFO: Pod downwardapi-volume-f3273915-9c5d-4248-9e5e-b41355bd6fbf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:23.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-34" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":1140,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:23.535: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 17 13:59:23.570: INFO: Waiting up to 5m0s for pod "downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d" in namespace "downward-api-7240" to be "Succeeded or Failed" Apr 17 13:59:23.574: INFO: Pod "downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.299021ms Apr 17 13:59:25.579: INFO: Pod "downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.007628043s STEP: Saw pod success Apr 17 13:59:25.579: INFO: Pod "downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d" satisfied condition "Succeeded or Failed" Apr 17 13:59:25.581: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-worker-gh8fj4 pod downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d container client-container: <nil> STEP: delete the pod Apr 17 13:59:25.595: INFO: Waiting for pod downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d to disappear Apr 17 13:59:25.597: INFO: Pod downwardapi-volume-520a6426-457c-47c6-a433-945f8227ab7d no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 17 13:59:25.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7240" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1207,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 17 13:59:25.617: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating service nodeport-test with type=NodePort in namespace services-5132 STEP: creating replication controller nodeport-test in namespace services-5132 I0417 13:59:25.666879 15 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-5132, replica count: 2 I0417 13:59:28.719328 15 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 17 13:59:28.719: INFO: Creating new exec pod Apr 17 13:59:31.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5132 exec execpodxr9wg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Apr 17 13:59:31.876: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Apr 17 13:59:31.876: INFO: stdout: "nodeport-test-s5nh9" Apr 17 13:59:31.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5132 exec execpodxr9wg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.29.118 80' Apr 17 13:59:32.013: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.29.118 80\nConnection to 10.133.29.118 80 port [tcp/http] succeeded!\n" Apr 17 13:59:32.013: INFO:
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:59:33.445: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 17 13:59:33.496: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-616 9c43c217-7fb0-414e-8914-948b74ed6a3c 16024 0 2022-04-17 13:59:33 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-17 13:59:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 17 13:59:33.496: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-616 9c43c217-7fb0-414e-8914-948b74ed6a3c 16025 0 2022-04-17 13:59:33 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2022-04-17 13:59:33 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 13:59:33.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-616" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":54,"skipped":1218,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
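
The Watchers spec above checks that a watch opened at an older resourceVersion replays every event that happened after that version (here the second MODIFIED and the DELETED event). Roughly the same behaviour can be observed straight from the API; the ConfigMap name and namespace below are placeholders, and the resourceVersion is captured after the first update.

# Sketch: create and mutate a ConfigMap, then watch from the recorded resourceVersion.
kubectl create configmap watch-demo --from-literal=mutation=0
kubectl patch configmap watch-demo --type=merge -p '{"data":{"mutation":"1"}}'
RV=$(kubectl get configmap watch-demo -o jsonpath='{.metadata.resourceVersion}')
kubectl patch configmap watch-demo --type=merge -p '{"data":{"mutation":"2"}}'
kubectl delete configmap watch-demo
# The stream starts at RV and replays the later MODIFIED and DELETED events.
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}&fieldSelector=metadata.name=watch-demo"
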
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 13:59:33.538: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
Apr 17 14:00:13.658: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-4exvhp-control-plane-ss4pf is Running (Ready = true)
Apr 17 14:00:13.775: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
Apr 17 14:00:13.775: INFO: Deleting pod "simpletest.rc-2gr6x" in namespace "gc-2634"
Apr 17 14:00:13.783: INFO: Deleting pod "simpletest.rc-2lzhk" in namespace "gc-2634"
Apr 17 14:00:13.794: INFO: Deleting pod "simpletest.rc-2pp8m" in namespace "gc-2634"
Apr 17 14:00:13.804: INFO: Deleting pod "simpletest.rc-42w29" in namespace "gc-2634"
Apr 17 14:00:13.811: INFO: Deleting pod "simpletest.rc-4d2xg" in namespace "gc-2634"
Apr 17 14:00:13.821: INFO: Deleting pod "simpletest.rc-4p7kh" in namespace "gc-2634"
Apr 17 14:00:13.831: INFO: Deleting pod "simpletest.rc-4vf9z" in namespace "gc-2634"
Apr 17 14:00:13.846: INFO: Deleting pod "simpletest.rc-5kg98" in namespace "gc-2634"
Apr 17 14:00:13.855: INFO: Deleting pod "simpletest.rc-5qwrp" in namespace "gc-2634"
Apr 17 14:00:13.869: INFO: Deleting pod "simpletest.rc-5vf59" in namespace "gc-2634"
Apr 17 14:00:13.889: INFO: Deleting pod "simpletest.rc-62lqx" in namespace "gc-2634"
Apr 17 14:00:13.902: INFO: Deleting pod "simpletest.rc-6shhb" in namespace "gc-2634"
Apr 17 14:00:13.926: INFO: Deleting pod "simpletest.rc-76jjj" in namespace "gc-2634"
Apr 17 14:00:13.950: INFO: Deleting pod "simpletest.rc-7hm9b" in namespace "gc-2634"
Apr 17 14:00:13.969: INFO: Deleting pod "simpletest.rc-7sh4p" in namespace "gc-2634"
Apr 17 14:00:14.004: INFO: Deleting pod "simpletest.rc-7wdpm" in namespace "gc-2634"
Apr 17 14:00:14.046: INFO: Deleting pod "simpletest.rc-7x6sz" in namespace "gc-2634"
Apr 17 14:00:14.085: INFO: Deleting pod "simpletest.rc-85lt5" in namespace "gc-2634"
Apr 17 14:00:14.137: INFO: Deleting pod "simpletest.rc-85n7t" in namespace "gc-2634"
Apr 17 14:00:14.191: INFO: Deleting pod "simpletest.rc-8f6l9" in namespace "gc-2634"
Apr 17 14:00:14.213: INFO: Deleting pod "simpletest.rc-9ftzc" in namespace "gc-2634"
Apr 17 14:00:14.242: INFO: Deleting pod "simpletest.rc-9sr86" in namespace "gc-2634"
Apr 17 14:00:14.262: INFO: Deleting pod "simpletest.rc-9t4lb" in namespace "gc-2634"
Apr 17 14:00:14.298: INFO: Deleting pod "simpletest.rc-9wc24" in namespace "gc-2634"
Apr 17 14:00:14.324: INFO: Deleting pod "simpletest.rc-bd7b5" in namespace "gc-2634"
Apr 17 14:00:14.388: INFO: Deleting pod "simpletest.rc-bdcpq" in namespace "gc-2634"
Apr 17 14:00:14.441: INFO: Deleting pod "simpletest.rc-c6cq2" in namespace "gc-2634"
Apr 17 14:00:14.473: INFO: Deleting pod "simpletest.rc-cmbfg" in namespace "gc-2634"
Apr 17 14:00:14.489: INFO: Deleting pod "simpletest.rc-d2787" in namespace "gc-2634"
Apr 17 14:00:14.509: INFO: Deleting pod "simpletest.rc-d647k" in namespace "gc-2634"
Apr 17 14:00:14.526: INFO: Deleting pod "simpletest.rc-d8c4m" in namespace "gc-2634"
Apr 17 14:00:14.538: INFO: Deleting pod "simpletest.rc-dbs8t" in namespace "gc-2634"
Apr 17 14:00:14.575: INFO: Deleting pod "simpletest.rc-drzwr" in namespace "gc-2634"
Apr 17 14:00:14.615: INFO: Deleting pod "simpletest.rc-ds9nd" in namespace "gc-2634"
Apr 17 14:00:14.629: INFO: Deleting pod "simpletest.rc-f64mx" in namespace "gc-2634"
Apr 17 14:00:14.657: INFO: Deleting pod "simpletest.rc-fdkrr" in namespace "gc-2634"
Apr 17 14:00:14.669: INFO: Deleting pod "simpletest.rc-fkpkg" in namespace "gc-2634"
Apr 17 14:00:14.687: INFO: Deleting pod "simpletest.rc-fpsqf" in namespace "gc-2634"
Apr 17 14:00:14.724: INFO: Deleting pod "simpletest.rc-g5rdz" in namespace "gc-2634"
Apr 17 14:00:14.750: INFO: Deleting pod "simpletest.rc-gl7z4" in namespace "gc-2634"
Apr 17 14:00:14.778: INFO: Deleting pod "simpletest.rc-gmvjd" in namespace "gc-2634"
Apr 17 14:00:14.802: INFO: Deleting pod "simpletest.rc-gtztr" in namespace "gc-2634"
Apr 17 14:00:14.821: INFO: Deleting pod "simpletest.rc-h87bs" in namespace "gc-2634"
Apr 17 14:00:14.858: INFO: Deleting pod "simpletest.rc-hcq9p" in namespace "gc-2634"
Apr 17 14:00:14.874: INFO: Deleting pod "simpletest.rc-hdb29" in namespace "gc-2634"
Apr 17 14:00:14.911: INFO: Deleting pod "simpletest.rc-hfbh2" in namespace "gc-2634"
Apr 17 14:00:14.937: INFO: Deleting pod "simpletest.rc-j2kl5" in namespace "gc-2634"
Apr 17 14:00:15.041: INFO: Deleting pod "simpletest.rc-j7bs6" in namespace "gc-2634"
Apr 17 14:00:15.092: INFO: Deleting pod "simpletest.rc-jdbcg" in namespace "gc-2634"
Apr 17 14:00:15.182: INFO: Deleting pod "simpletest.rc-k7xp2" in namespace "gc-2634"
Apr 17 14:00:15.228: INFO: Deleting pod "simpletest.rc-kk465" in namespace "gc-2634"
Apr 17 14:00:15.271: INFO: Deleting pod "simpletest.rc-kqs4m" in namespace "gc-2634"
Apr 17 14:00:15.303: INFO: Deleting pod "simpletest.rc-l6259" in namespace "gc-2634"
Apr 17 14:00:15.325: INFO: Deleting pod "simpletest.rc-ln7fl" in namespace "gc-2634"
Apr 17 14:00:15.382: INFO: Deleting pod "simpletest.rc-m4hrp" in namespace "gc-2634"
Apr 17 14:00:15.439: INFO: Deleting pod "simpletest.rc-mflc7" in namespace "gc-2634"
Apr 17 14:00:15.459: INFO: Deleting pod "simpletest.rc-mfq64" in namespace "gc-2634"
Apr 17 14:00:15.508: INFO: Deleting pod "simpletest.rc-mh46k" in namespace "gc-2634"
Apr 17 14:00:15.569: INFO: Deleting pod "simpletest.rc-mkjd7" in namespace "gc-2634"
Apr 17 14:00:15.615: INFO: Deleting pod "simpletest.rc-mldrm" in namespace "gc-2634"
Apr 17 14:00:15.639: INFO: Deleting pod "simpletest.rc-mpvlw" in namespace "gc-2634"
Apr 17 14:00:15.692: INFO: Deleting pod "simpletest.rc-n52rs" in namespace "gc-2634"
Apr 17 14:00:15.722: INFO: Deleting pod "simpletest.rc-n896x" in namespace "gc-2634"
Apr 17 14:00:15.734: INFO: Deleting pod "simpletest.rc-njqm7" in namespace "gc-2634"
Apr 17 14:00:15.763: INFO: Deleting pod "simpletest.rc-nz7ln" in namespace "gc-2634"
Apr 17 14:00:15.788: INFO: Deleting pod "simpletest.rc-pcfgr" in namespace "gc-2634"
Apr 17 14:00:15.812: INFO: Deleting pod "simpletest.rc-pfddg" in namespace "gc-2634"
Apr 17 14:00:15.821: INFO: Deleting pod "simpletest.rc-pnp5g" in namespace "gc-2634"
Apr 17 14:00:15.879: INFO: Deleting pod "simpletest.rc-prgss" in namespace "gc-2634"
Apr 17 14:00:15.891: INFO: Deleting pod "simpletest.rc-pwt5k" in namespace "gc-2634"
Apr 17 14:00:15.919: INFO: Deleting pod "simpletest.rc-q2j67" in namespace "gc-2634"
Apr 17 14:00:15.952: INFO: Deleting pod "simpletest.rc-qgdrh" in namespace "gc-2634"
Apr 17 14:00:15.984: INFO: Deleting pod "simpletest.rc-qrgk7" in namespace "gc-2634"
Apr 17 14:00:16.015: INFO: Deleting pod "simpletest.rc-qrjf7" in namespace "gc-2634"
Apr 17 14:00:16.039: INFO: Deleting pod "simpletest.rc-r2dwm" in namespace "gc-2634"
Apr 17 14:00:16.097: INFO: Deleting pod "simpletest.rc-r8599" in namespace "gc-2634"
Apr 17 14:00:16.129: INFO: Deleting pod "simpletest.rc-rdj8k" in namespace "gc-2634"
Apr 17 14:00:16.161: INFO: Deleting pod "simpletest.rc-rjclb" in namespace "gc-2634"
Apr 17 14:00:16.181: INFO: Deleting pod "simpletest.rc-rmb2k" in namespace "gc-2634"
Apr 17 14:00:16.218: INFO: Deleting pod "simpletest.rc-rv2bd" in namespace "gc-2634"
Apr 17 14:00:16.233: INFO: Deleting pod "simpletest.rc-shn9p" in namespace "gc-2634"
Apr 17 14:00:16.318: INFO: Deleting pod "simpletest.rc-t5qlt" in namespace "gc-2634"
Apr 17 14:00:16.356: INFO: Deleting pod "simpletest.rc-t68zc" in namespace "gc-2634"
Apr 17 14:00:16.374: INFO: Deleting pod "simpletest.rc-t9mlw" in namespace "gc-2634"
Apr 17 14:00:16.394: INFO: Deleting pod "simpletest.rc-tdtt6" in namespace "gc-2634"
Apr 17 14:00:16.428: INFO: Deleting pod "simpletest.rc-tsjmz" in namespace "gc-2634"
Apr 17 14:00:16.446: INFO: Deleting pod "simpletest.rc-v8v2m" in namespace "gc-2634"
Apr 17 14:00:16.484: INFO: Deleting pod "simpletest.rc-vn9nl" in namespace "gc-2634"
Apr 17 14:00:16.521: INFO: Deleting pod "simpletest.rc-vskkj" in namespace "gc-2634"
Apr 17 14:00:16.553: INFO: Deleting pod "simpletest.rc-vtx8w" in namespace "gc-2634"
Apr 17 14:00:16.606: INFO: Deleting pod "simpletest.rc-vwvdv" in namespace "gc-2634"
Apr 17 14:00:16.653: INFO: Deleting pod "simpletest.rc-wp2hv" in namespace "gc-2634"
Apr 17 14:00:16.683: INFO: Deleting pod "simpletest.rc-wtslj" in namespace "gc-2634"
Apr 17 14:00:16.741: INFO: Deleting pod "simpletest.rc-x6brr" in namespace "gc-2634"
Apr 17 14:00:16.830: INFO: Deleting pod "simpletest.rc-x8ktf" in namespace "gc-2634"
Apr 17 14:00:16.888: INFO: Deleting pod "simpletest.rc-xh6xz" in namespace "gc-2634"
Apr 17 14:00:16.910: INFO: Deleting pod "simpletest.rc-z26m2" in namespace "gc-2634"
Apr 17 14:00:16.946: INFO: Deleting pod "simpletest.rc-zdm8v" in namespace "gc-2634"
Apr 17 14:00:16.964: INFO: Deleting pod "simpletest.rc-zg6lx" in namespace "gc-2634"
Apr 17 14:00:17.015: INFO: Deleting pod "simpletest.rc-zswsf" in namespace "gc-2634"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 14:00:17.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2634" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":55,"skipped":1246,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 14:00:17.117: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 14:00:17.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 17 14:00:17.208: INFO: The status of Pod pod-logs-websocket-4b981421-64a7-40b9-b3cc-7fde0265ea3b is Pending, waiting for it to be Running (with Ready = true)
Apr 17 14:00:19.212: INFO: The status of Pod pod-logs-websocket-4b981421-64a7-40b9-b3cc-7fde0265ea3b is Pending, waiting for it to be Running (with Ready = true)
Apr 17 14:00:21.213: INFO: The status of Pod pod-logs-websocket-4b981421-64a7-40b9-b3cc-7fde0265ea3b is Pending, waiting for it to be Running (with Ready = true)
Apr 17 14:00:23.212: INFO: The status of Pod pod-logs-websocket-4b981421-64a7-40b9-b3cc-7fde0265ea3b is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 14:00:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2053" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1248,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 14:00:23.283: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-tp88
STEP: Creating a pod to test atomic-volume-subpath
Apr 17 14:00:23.332: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-tp88" in namespace "subpath-1874" to be "Succeeded or Failed"
Apr 17 14:00:23.337: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Pending", Reason="", readiness=false. Elapsed: 5.184184ms
Apr 17 14:00:25.341: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 2.008899352s
Apr 17 14:00:27.345: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 4.013199054s
Apr 17 14:00:29.350: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 6.017689884s
Apr 17 14:00:31.353: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 8.021136278s
Apr 17 14:00:33.358: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 10.025851546s
Apr 17 14:00:35.362: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 12.029694302s
Apr 17 14:00:37.366: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 14.033995906s
Apr 17 14:00:39.371: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 16.038330068s
Apr 17 14:00:41.374: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 18.042198876s
Apr 17 14:00:43.380: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Running", Reason="", readiness=true. Elapsed: 20.047275274s
Apr 17 14:00:45.384: INFO: Pod "pod-subpath-test-downwardapi-tp88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.05180591s
STEP: Saw pod success
Apr 17 14:00:45.384: INFO: Pod "pod-subpath-test-downwardapi-tp88" satisfied condition "Succeeded or Failed"
Apr 17 14:00:45.387: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-subpath-test-downwardapi-tp88 container test-container-subpath-downwardapi-tp88: <nil>
STEP: delete the pod
Apr 17 14:00:45.402: INFO: Waiting for pod pod-subpath-test-downwardapi-tp88 to disappear
Apr 17 14:00:45.405: INFO: Pod pod-subpath-test-downwardapi-tp88 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-tp88
Apr 17 14:00:45.405: INFO: Deleting pod "pod-subpath-test-downwardapi-tp88" in namespace "subpath-1874"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 14:00:45.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1874" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":57,"skipped":1278,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSS
------------------------------
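
The Subpath spec above mounts a single file out of a downwardAPI volume by using subPath on the volumeMount. A minimal pod doing the same is sketched below; the pod name, image and file path are illustrative, not taken from the test.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo/podname   # mount exactly one file from the volume
      subPath: podname
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs subpath-downward-demo   # once the pod completes, prints its own name
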
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 14:00:45.420: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 17 14:00:45.449: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 14:00:48.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3899" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":58,"skipped":1281,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
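
The CustomResourceDefinition spec above verifies that defaults declared in a structural schema are applied both when objects are created and when they are read back from storage. A compact sketch of such a CRD follows; the group, kind and field names are made up for illustration only.

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                default: 1   # applied on create and when reading from storage
EOF
kubectl wait --for=condition=Established crd/widgets.example.com
# A Widget created without .spec.replicas comes back with the default filled in:
kubectl apply -f - <<'EOF'
apiVersion: example.com/v1
kind: Widget
metadata:
  name: widget-demo
spec: {}
EOF
kubectl get widget widget-demo -o jsonpath='{.spec.replicas}'   # prints 1
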
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 14:00:48.604: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-5db9c9f5-47ce-4742-a52b-e5d8df54f299
STEP: Creating a pod to test consume secrets
Apr 17 14:00:48.640: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d" in namespace "projected-8299" to be "Succeeded or Failed"
Apr 17 14:00:48.643: INFO: Pod "pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550923ms
Apr 17 14:00:50.647: INFO: Pod "pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00655046s
STEP: Saw pod success
Apr 17 14:00:50.647: INFO: Pod "pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d" satisfied condition "Succeeded or Failed"
Apr 17 14:00:50.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-4exvhp-md-0-7b94d55997-k6cck pod pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 17 14:00:50.663: INFO: Waiting for pod pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d to disappear
Apr 17 14:00:50.666: INFO: Pod pod-projected-secrets-7aba6bc5-ef34-4b4f-b672-749e6e7a818d no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 17 14:00:50.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8299" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1306,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSS
------------------------------
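
The Projected secret spec above consumes a Secret through a projected volume with an explicit defaultMode and checks the resulting file inside the pod. A minimal equivalent is sketched below; the secret name, pod name and mode are illustrative, not taken from the test.

kubectl create secret generic projected-demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0400        # mode applied to the projected files
      sources:
      - secret:
          name: projected-demo-secret
EOF
kubectl logs projected-secret-demo   # prints the secret value once the pod completes
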
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 17 14:00:50.690: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go: