Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc0005cc9a8>: {
        error: <*errors.withMessage | 0xc0009a6280>{
            cause: <*errors.errorString | 0xc0010768e0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
    Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
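Exit code 137 follows the usual 128 + N convention for a process killed by signal N, so the kubetest container was terminated by SIGKILL (9) rather than failing a test assertion; that usually points at an out-of-memory kill or an external timeout. A minimal sketch of the decoding, assuming a POSIX shell:

    # 137 - 128 = 9: the container's main process was killed by signal 9 (SIGKILL),
    # typically delivered by the kernel OOM killer or a supervising timeout.
    echo $((137 - 128))   # prints: 9
    kill -l 9             # prints: KILL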
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-w8yns6
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-w8yns6"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-upqhfa" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-upqhfa --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-upqhfa-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-upqhfa-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-w8yns6/k8s-upgrade-and-conformance-upqhfa-bk7tk to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-w8yns6/k8s-upgrade-and-conformance-upqhfa-bk7tk to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-w8yns6/k8s-upgrade-and-conformance-upqhfa-md-0-6prb7 to be upgraded to v1.23.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-w8yns6/k8s-upgrade-and-conformance-upqhfa-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-w8yns6/k8s-upgrade-and-conformance-upqhfa-mp-0 to be upgraded from v1.22.17 to v1.23.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1672584892 - Will randomize all specs
Will run 7052 specs
Running in parallel across 4 nodes
Jan 1 14:54:57.958: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:54:57.962: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 1 14:54:57.981: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 1 14:54:58.025: INFO: The status of Pod coredns-bd6b6df9f-fc6sd is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 1 14:54:58.025: INFO: The status of Pod coredns-bd6b6df9f-r2ph9 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 1 14:54:58.025: INFO: 14 / 16 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 1 14:54:58.025: INFO: expected 2 pod replicas in namespace 'kube-system', 0 are Running and Ready.
Jan 1 14:54:58.025: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 1 14:54:58.025: INFO: coredns-bd6b6df9f-fc6sd k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC }]
Jan 1 14:54:58.025: INFO: coredns-bd6b6df9f-r2ph9 k8s-upgrade-and-conformance-upqhfa-worker-9emfga Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:54:57 +0000 UTC }]
Jan 1 14:54:58.025: INFO:
Jan 1 14:55:00.108: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
Jan 1 14:55:00.108: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 1 14:55:00.108: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 1 14:55:00.123: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 1 14:55:00.124: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 1 14:55:00.124: INFO: e2e test version: v1.23.15
Jan 1 14:55:00.125: INFO: kube-apiserver version: v1.23.15
Jan 1 14:55:00.127: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:55:00.138: INFO: Cluster IP family: ipv4
------------------------------
Jan 1 14:55:00.131: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:55:00.166: INFO: Cluster IP family: ipv4
------------------------------
Jan 1 14:55:00.140: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:55:00.168: INFO: Cluster IP family: ipv4
------------------------------
Jan 1 14:55:00.140: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:55:00.171: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:00.378: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
W0101 14:55:00.411920 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 1 14:55:00.412: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-2522/secret-test-2f3d9a07-7c13-4669-95ae-f10803e5089d
STEP: Creating a pod to test consume secrets
Jan 1 14:55:00.445: INFO: Waiting up to 5m0s for pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f" in namespace "secrets-2522" to be "Succeeded or Failed"
Jan 1 14:55:00.452: INFO: Pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.320022ms
Jan 1 14:55:02.463: INFO: Pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017416285s
Jan 1 14:55:04.471: INFO: Pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025922114s
Jan 1 14:55:06.479: INFO: Pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033922172s
STEP: Saw pod success
Jan 1 14:55:06.479: INFO: Pod "pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f" satisfied condition "Succeeded or Failed"
Jan 1 14:55:06.485: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f container env-test: <nil>
STEP: delete the pod
Jan 1 14:55:06.531: INFO: Waiting for pod pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f to disappear
Jan 1 14:55:06.535: INFO: Pod pod-configmaps-08b4ac5e-bc1e-40a3-bd0d-b37d52b8b81f no longer exists
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:06.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2522" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":58,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:00.276: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
W0101 14:55:00.326667 15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 1 14:55:00.326: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 1 14:55:00.365: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd" in namespace "downward-api-4586" to be "Succeeded or Failed"
Jan 1 14:55:00.377: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 12.801818ms
Jan 1 14:55:02.392: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027287142s
Jan 1 14:55:04.398: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03342443s
Jan 1 14:55:06.406: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041745292s
Jan 1 14:55:08.413: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047898736s
STEP: Saw pod success
Jan 1 14:55:08.413: INFO: Pod "downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd" satisfied condition "Succeeded or Failed"
Jan 1 14:55:08.417: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-zwqnic pod downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd container client-container: <nil>
STEP: delete the pod
Jan 1 14:55:08.478: INFO: Waiting for pod downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd to disappear
Jan 1 14:55:08.485: INFO: Pod downwardapi-volume-f8a502b4-f250-4ae0-a27d-4d2f7c919cdd no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:08.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4586" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":24,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:00.244: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
W0101 14:55:00.315329 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 1 14:55:00.315: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 1 14:55:00.380: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:55:02.390: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:55:04.387: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:55:06.388: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 1 14:55:06.404: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:55:08.410: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jan 1 14:55:08.431: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 14:55:08.439: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 14:55:10.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 14:55:10.445: INFO: Pod pod-with-prestop-http-hook still exists
Jan 1 14:55:12.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 1 14:55:12.445: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:12.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-485" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":13,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:08.543: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 1 14:55:13.684: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:13.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3919" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":38,"failed":0}
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:13.774: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 1 14:55:13.835: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 1 14:55:13.843: INFO: starting watch
STEP: patching
STEP: updating
Jan 1 14:55:13.878: INFO: waiting for watch events with expected annotations
Jan 1 14:55:13.878: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:13.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-122" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":3,"skipped":54,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:12.605: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating pod
Jan 1 14:55:12.654: INFO: The status of Pod pod-hostip-9a1a8974-4c67-43c7-a642-7fce92cd8966 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:55:14.666: INFO: The status of Pod pod-hostip-9a1a8974-4c67-43c7-a642-7fce92cd8966 is Running (Ready = true)
Jan 1 14:55:14.678: INFO: Pod pod-hostip-9a1a8974-4c67-43c7-a642-7fce92cd8966 has hostIP: 172.18.0.6
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:14.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6926" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":57,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:06.591: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 1 14:55:06.641: INFO: Waiting up to 5m0s for pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986" in namespace "emptydir-1050" to be "Succeeded or Failed"
Jan 1 14:55:06.644: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025429ms
Jan 1 14:55:08.653: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011297767s
Jan 1 14:55:10.660: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019143858s
Jan 1 14:55:12.671: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029372703s
Jan 1 14:55:14.676: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034597096s
STEP: Saw pod success
Jan 1 14:55:14.676: INFO: Pod "pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986" satisfied condition "Succeeded or Failed"
Jan 1 14:55:14.685: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986 container test-container: <nil>
STEP: delete the pod
Jan 1 14:55:14.729: INFO: Waiting for pod pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986 to disappear
Jan 1 14:55:14.732: INFO: Pod pod-7e1bc144-b1f5-4ca4-b55f-ee7c56cda986 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:14.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1050" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":74,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:14.783: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 14:55:14.824: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 1 14:55:16.900: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:17.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6656" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":3,"skipped":81,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:14.131: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Jan 1 14:55:14.160: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:19.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5522" for this suite.
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":4,"skipped":97,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:17.989: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Jan 1 14:55:29.377: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-upqhfa-bk7tk-vbnvt is Running (Ready = true)
Jan 1 14:55:29.692: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:29.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1483" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":89,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:00.260: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
W0101 14:55:00.315671 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 1 14:55:00.315: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 1 14:55:00.338: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 14:55:05.850: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:55:33.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3955" for this suite.
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":1,"skipped":32,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:14.739: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 14:55:14.785: INFO: Creating simple deployment test-new-deployment
Jan 1 14:55:14.821: INFO: deployment "test-new-deployment" doesn't have the required revision set
Jan 1 14:55:16.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 1 14:55:18.859: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 1 14:55:20.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14,
55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:23.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:25.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:26.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:28.900: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:30.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:55:32.861: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 14, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: getting scale subresource �[1mSTEP�[0m: updating a scale subresource �[1mSTEP�[0m: verifying the deployment Spec.Replicas was modified �[1mSTEP�[0m: Patch a scale subresource [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 1 14:55:34.988: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-5745 ea4225aa-9d02-41e1-87c5-324037aa9666 3084 3 2023-01-01 14:55:14 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-01-01 14:55:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 
2023-01-01 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0030ed118 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-01 14:55:34 +0000 UTC,LastTransitionTime:2023-01-01 14:55:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2023-01-01 14:55:34 +0000 UTC,LastTransitionTime:2023-01-01 14:55:14 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 1 14:55:35.030: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-5745 d91e5017-f3e4-43b7-abf1-2f58fa4246e9 3092 2 2023-01-01 14:55:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment ea4225aa-9d02-41e1-87c5-324037aa9666 0xc002a5bec0 0xc002a5bec1}] [] [{kube-controller-manager Update apps/v1 2023-01-01 14:55:14 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea4225aa-9d02-41e1-87c5-324037aa9666\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:55:34 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a5bf48 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 1 14:55:35.084: INFO: Pod "test-new-deployment-5d9fdcc779-48bgd" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-48bgd test-new-deployment-5d9fdcc779- deployment-5745 79633e60-d816-4019-a521-a2ef38796689 3088 0 2023-01-01 14:55:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 d91e5017-f3e4-43b7-abf1-2f58fa4246e9 0xc00330a2c0 0xc00330a2c1}] [] [{kube-controller-manager Update v1 2023-01-01 14:55:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d91e5017-f3e4-43b7-abf1-2f58fa4246e9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2cl7q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2cl7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-upqhfa-worker-zwqnic,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCond
ition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:55:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 1 14:55:35.085: INFO: Pod "test-new-deployment-5d9fdcc779-fgtkr" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-fgtkr test-new-deployment-5d9fdcc779- deployment-5745 14653230-5bf0-484f-9dcd-66f7aacb801e 3059 0 2023-01-01 14:55:14 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 d91e5017-f3e4-43b7-abf1-2f58fa4246e9 0xc00330a410 0xc00330a411}] [] [{kube-controller-manager Update v1 2023-01-01 14:55:14 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d91e5017-f3e4-43b7-abf1-2f58fa4246e9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-01 14:55:34 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.5\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vgdnw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgdnw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:55:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:55:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:55:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:55:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.5,StartTime:2023-01-01 14:55:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-01 14:55:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://b3cbc43bf70135943d970bb2be937b238c1fd99c9d71bb9c9481972c51b64081,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:35.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-5745" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":3,"skipped":64,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:29.889: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a ReplicationController �[1mSTEP�[0m: Ensuring resource quota status captures replication controller creation �[1mSTEP�[0m: Deleting a ReplicationController �[1mSTEP�[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:41.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-1994" for this suite. 
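For reference, the ResourceQuota spec logged above creates a quota, creates a ReplicationController, and checks that the quota status first captures and then releases the usage. A minimal client-go sketch of that flow, illustrative only and not the e2e framework's own code (the namespace, quota name and kubeconfig path are assumptions):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; the conformance run above happens to use /tmp/kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "quota-demo" // illustrative namespace

	// Create a quota that caps ReplicationControllers in the namespace.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceName("replicationcontrollers"): resource.MustParse("1"),
			},
		},
	}
	if _, err := cs.CoreV1().ResourceQuotas(ns).Create(ctx, rq, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// After creating (and later deleting) an RC in the namespace, poll the quota
	// status and watch .status.used rise to 1 and then drop back to 0.
	got, err := cs.CoreV1().ResourceQuotas(ns).Get(ctx, "test-quota", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("used:", got.Status.Used)
}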
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":5,"skipped":90,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:41.411: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating projection with secret that has name secret-emptykey-test-6e6bac38-a5cd-47a2-8c4c-d2a92a79702c [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:41.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-3704" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":6,"skipped":118,"failed":0} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:41.801: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:55:41.889: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:43.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-2275" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":7,"skipped":120,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:35.303: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename e2e-kubelet-etc-hosts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Setting up the test �[1mSTEP�[0m: Creating hostNetwork=false pod Jan 1 14:55:35.508: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:37.524: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:39.523: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:41.562: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:43.528: INFO: The status of Pod test-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:45.546: INFO: The status of Pod test-pod is Running (Ready = true) �[1mSTEP�[0m: Creating hostNetwork=true pod Jan 1 14:55:45.590: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:47.633: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:49.598: INFO: The status of Pod test-host-network-pod is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:55:51.597: INFO: The status of Pod test-host-network-pod is Running (Ready = true) �[1mSTEP�[0m: Running the test �[1mSTEP�[0m: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jan 1 14:55:51.601: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:51.601: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:51.603: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:51.603: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:51.761: INFO: Exec stderr: "" Jan 1 14:55:51.761: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:51.761: INFO: >>> kubeConfig: /tmp/kubeconfig 
Jan 1 14:55:51.763: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:51.764: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:51.902: INFO: Exec stderr: "" Jan 1 14:55:51.902: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:51.902: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:51.903: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:51.903: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.046: INFO: Exec stderr: "" Jan 1 14:55:52.046: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.047: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.048: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.048: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.178: INFO: Exec stderr: "" �[1mSTEP�[0m: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jan 1 14:55:52.178: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.178: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.180: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.180: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.342: INFO: Exec stderr: "" Jan 1 14:55:52.342: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.342: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.343: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.343: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-3&container=busybox-3&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.491: INFO: Exec stderr: "" �[1mSTEP�[0m: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jan 1 14:55:52.491: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.491: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 
14:55:52.492: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.492: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.647: INFO: Exec stderr: "" Jan 1 14:55:52.647: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.647: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.648: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.648: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-1&container=busybox-1&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.783: INFO: Exec stderr: "" Jan 1 14:55:52.783: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.783: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.784: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.785: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:52.901: INFO: Exec stderr: "" Jan 1 14:55:52.901: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3330 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:55:52.902: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:55:52.903: INFO: ExecWithOptions: Clientset creation Jan 1 14:55:52.903: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/e2e-kubelet-etc-hosts-3330/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts-original&container=busybox-2&container=busybox-2&stderr=true&stdout=true %!s(MISSING)) Jan 1 14:55:53.047: INFO: Exec stderr: "" [AfterEach] [sig-node] KubeletManagedEtcHosts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:53.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "e2e-kubelet-etc-hosts-3330" for this suite. 
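The KubeletManagedEtcHosts spec above verifies that the kubelet injects /etc/hosts into containers of hostNetwork=false pods unless a container mounts its own /etc/hosts, and leaves the file alone for hostNetwork=true pods. A minimal sketch of the two pod shapes being compared, using k8s.io/api/core/v1 struct literals (the image and volume name are illustrative assumptions, not the e2e test's own definitions):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The kubelet manages /etc/hosts only for containers of hostNetwork=false pods
// that do not mount /etc/hosts themselves.
func etcHostsPods() (*corev1.Pod, *corev1.Pod) {
	managed := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: false,
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				// kubelet-managed /etc/hosts
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
				// opts out: mounts the node's /etc/hosts directly
				{
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	hostNet := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-host-network-pod"},
		Spec: corev1.PodSpec{
			HostNetwork: true,
			// Not kubelet-managed: the pod sees the node's own /etc/hosts.
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
			},
		},
	}
	return managed, hostNet
}

func main() { _, _ = etcHostsPods() }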
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":77,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:53.069: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication Jan 1 14:55:54.237: INFO: role binding webhook-auth-reader already exists �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 1 14:55:54.265: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 1 14:55:57.313: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API �[1mSTEP�[0m: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API �[1mSTEP�[0m: Creating a dummy validating-webhook-configuration object �[1mSTEP�[0m: Deleting the validating-webhook-configuration, which should be possible to remove �[1mSTEP�[0m: Creating a dummy mutating-webhook-configuration object �[1mSTEP�[0m: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:57.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-4944" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-4944-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":5,"skipped":78,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:57.784: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of events Jan 1 14:55:57.884: INFO: created test-event-1 Jan 1 14:55:57.896: INFO: created test-event-2 Jan 1 14:55:57.908: INFO: created test-event-3 �[1mSTEP�[0m: get a list of Events with a label in the current namespace �[1mSTEP�[0m: delete collection of events Jan 1 14:55:57.919: INFO: requesting DeleteCollection of events �[1mSTEP�[0m: check that the list of events matches the requested quantity Jan 1 14:55:57.953: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:55:57.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-2179" for this suite. 
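The Events spec above creates three labelled events, deletes them with a single DeleteCollection call scoped by label selector, and lists again to confirm the count. A minimal client-go sketch of that call (the namespace and label selector shown are assumptions for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "events-demo"              // illustrative namespace
	selector := "testevent-set=true" // illustrative label on the created events

	// Delete every event in the namespace that carries the label.
	if err := cs.CoreV1().Events(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: selector}); err != nil {
		panic(err)
	}

	// List with the same selector to confirm the collection is now empty.
	list, err := cs.CoreV1().Events(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		panic(err)
	}
	fmt.Println("events remaining:", len(list.Items))
}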
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":6,"skipped":85,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:43.145: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ExternalName to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-3068 �[1mSTEP�[0m: changing the ExternalName service to type=NodePort �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-3068 I0101 14:55:43.648556 19 runners.go:193] Created replication controller with name: externalname-service, namespace: services-3068, replica count: 2 I0101 14:55:46.700010 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0101 14:55:49.701218 19 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 14:55:49.701: INFO: Creating new exec pod Jan 1 14:55:56.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3068 exec execpod642w2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jan 1 14:55:57.315: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 1 14:55:57.315: INFO: stdout: "externalname-service-h8s4p" Jan 1 14:55:57.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3068 exec execpod642w2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.220.202 80' Jan 1 14:55:57.855: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.220.202 80\nConnection to 10.143.220.202 80 port [tcp/http] succeeded!\n" Jan 1 14:55:57.855: INFO: stdout: "" Jan 1 14:55:58.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3068 exec execpod642w2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.220.202 80' Jan 1 14:55:59.359: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.220.202 80\nConnection to 10.143.220.202 80 port [tcp/http] succeeded!\n" Jan 1 14:55:59.359: INFO: stdout: "externalname-service-6k8h9" 
Jan 1 14:55:59.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3068 exec execpod642w2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31414' Jan 1 14:55:59.796: INFO: stderr: "+ nc -v -t -w 2 172.18.0.5 31414\n+ echo hostName\nConnection to 172.18.0.5 31414 port [tcp/*] succeeded!\n" Jan 1 14:55:59.796: INFO: stdout: "externalname-service-h8s4p" Jan 1 14:55:59.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3068 exec execpod642w2 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31414' Jan 1 14:56:00.182: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31414\nConnection to 172.18.0.7 31414 port [tcp/*] succeeded!\n" Jan 1 14:56:00.182: INFO: stdout: "externalname-service-h8s4p" Jan 1 14:56:00.182: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:00.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3068" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":8,"skipped":138,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:58.109: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-adf2e46d-7925-4098-977f-a32962bd36b3 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 1 14:55:58.179: INFO: Waiting up to 5m0s for pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e" in namespace "secrets-1240" to be "Succeeded or Failed" Jan 1 14:55:58.188: INFO: Pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.081585ms Jan 1 14:56:00.200: INFO: Pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021301717s Jan 1 14:56:02.272: INFO: Pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092713801s Jan 1 14:56:04.324: INFO: Pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.145109822s �[1mSTEP�[0m: Saw pod success Jan 1 14:56:04.324: INFO: Pod "pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e" satisfied condition "Succeeded or Failed" Jan 1 14:56:04.523: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb pod pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 1 14:56:04.883: INFO: Waiting for pod pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e to disappear Jan 1 14:56:04.928: INFO: Pod pod-secrets-103e4062-ef43-4451-892f-a2e246c7750e no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:04.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-1240" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":131,"failed":0} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:56:00.310: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should update/patch PodDisruptionBudget status [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Updating PodDisruptionBudget status �[1mSTEP�[0m: Waiting for all pods to be running Jan 1 14:56:02.746: INFO: running pods: 0 < 1 Jan 1 14:56:04.783: INFO: running pods: 0 < 1 �[1mSTEP�[0m: locating a running pod �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Patching PodDisruptionBudget status �[1mSTEP�[0m: Waiting for the pdb to be processed [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:07.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-6381" for this suite. 
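The DisruptionController spec above creates a PodDisruptionBudget, waits for it to be processed, and then updates and patches its status subresource. A minimal client-go sketch of the update path, purely illustrative (namespace, names and selector are assumptions; in normal operation the disruption controller owns this status):

package main

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "disruption-demo" // illustrative namespace

	minAvailable := intstr.FromInt(1)
	pdb := &policyv1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb"},
		Spec: policyv1.PodDisruptionBudgetSpec{
			MinAvailable: &minAvailable,
			Selector:     &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}},
		},
	}
	created, err := cs.PolicyV1().PodDisruptionBudgets(ns).Create(ctx, pdb, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Writes to .status go through the status subresource, not a plain Update.
	created.Status.ObservedGeneration = created.Generation
	if _, err := cs.PolicyV1().PodDisruptionBudgets(ns).UpdateStatus(ctx, created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}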
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":9,"skipped":146,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:56:05.047: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:56:05.317: INFO: Creating deployment "test-recreate-deployment" Jan 1 14:56:05.405: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Jan 1 14:56:05.532: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jan 1 14:56:07.748: INFO: Waiting deployment "test-recreate-deployment" to complete Jan 1 14:56:07.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 56, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 56, 5, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 56, 5, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 56, 5, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-594f666cd9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 14:56:09.925: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jan 1 14:56:09.977: INFO: Updating deployment test-recreate-deployment Jan 1 14:56:09.977: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 1 14:56:11.204: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3838 b90ea3d4-6678-41e9-9ce1-c46fdea4097a 4251 2 2023-01-01 14:56:05 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-01 14:56:09 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:56:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00083f438 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-01 14:56:11 +0000 UTC,LastTransitionTime:2023-01-01 14:56:11 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2023-01-01 14:56:11 +0000 UTC,LastTransitionTime:2023-01-01 14:56:05 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Jan 1 14:56:11.268: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-3838 83e37489-e5c7-4003-8236-5b641b306638 4247 1 2023-01-01 14:56:10 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b90ea3d4-6678-41e9-9ce1-c46fdea4097a 0xc004594e07 0xc004594e08}] [] 
[{kube-controller-manager Update apps/v1 2023-01-01 14:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b90ea3d4-6678-41e9-9ce1-c46fdea4097a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:56:10 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004594ea8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 1 14:56:11.268: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jan 1 14:56:11.268: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-594f666cd9 deployment-3838 c0a88c5c-188b-4034-ac8a-a198a7c2ab57 4230 2 2023-01-01 14:56:05 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:594f666cd9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b90ea3d4-6678-41e9-9ce1-c46fdea4097a 0xc004594cf7 0xc004594cf8}] [] [{kube-controller-manager Update apps/v1 2023-01-01 14:56:05 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b90ea3d4-6678-41e9-9ce1-c46fdea4097a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:56:10 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 594f666cd9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:594f666cd9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004594da8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 1 14:56:11.317: INFO: Pod "test-recreate-deployment-5b99bd5487-qpk9h" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-qpk9h test-recreate-deployment-5b99bd5487- deployment-3838 6bd9bf0e-0836-45ba-8bef-efdfe0b97bef 4248 0 2023-01-01 14:56:10 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 83e37489-e5c7-4003-8236-5b641b306638 0xc004578587 0xc004578588}] [] [{kube-controller-manager Update v1 2023-01-01 14:56:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"83e37489-e5c7-4003-8236-5b641b306638\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-01 14:56:10 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cmwz5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cmwz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-upqhfa-worker-zwqnic,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,
Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:56:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:56:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:56:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:56:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-01 14:56:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:11.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-3838" for this suite. 
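The RecreateDeployment spec above exercises the Recreate strategy: on a rollout, all old pods are deleted before any new ones are created, which is why the dump shows the old ReplicaSet scaled to 0 while the new pod is still ContainerCreating. A minimal sketch of a Deployment using that strategy (a hand-written illustration; only the labels and image mirror the logged objects):

package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func recreateDeployment() *appsv1.Deployment {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate: scale the old ReplicaSet to zero before bringing up the new one,
			// so old and new pods never run side by side.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{
						{Name: "httpd", Image: "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2"},
					},
				},
			},
		},
	}
}

func main() { _ = recreateDeployment() }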
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":8,"skipped":133,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:55:20.303: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for the rc to be deleted �[1mSTEP�[0m: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods �[1mSTEP�[0m: Gathering metrics Jan 1 14:56:00.776: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-upqhfa-bk7tk-vbnvt is Running (Ready = true) Jan 1 14:56:00.926: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 1 14:56:00.926: INFO: Deleting pod "simpletest.rc-25w2s" in namespace "gc-4217" Jan 1 14:56:00.951: INFO: Deleting pod "simpletest.rc-2dlqr" in namespace "gc-4217" Jan 1 14:56:00.979: INFO: Deleting pod "simpletest.rc-2k7xx" in namespace "gc-4217" Jan 1 14:56:01.003: INFO: Deleting pod "simpletest.rc-2z4km" in namespace "gc-4217" Jan 1 14:56:01.027: INFO: Deleting pod "simpletest.rc-44lhw" in namespace "gc-4217" Jan 1 14:56:01.055: INFO: Deleting pod "simpletest.rc-4bvmn" in namespace "gc-4217" Jan 1 14:56:01.098: INFO: Deleting pod "simpletest.rc-4l4s5" in namespace "gc-4217" Jan 1 14:56:01.141: INFO: Deleting pod "simpletest.rc-4x6zq" in namespace "gc-4217" Jan 1 14:56:01.189: INFO: Deleting pod "simpletest.rc-55n8f" in namespace "gc-4217" Jan 1 14:56:01.226: INFO: Deleting pod "simpletest.rc-5dfg5" in namespace "gc-4217" Jan 1 14:56:01.255: INFO: Deleting pod "simpletest.rc-5hzpr" in namespace "gc-4217" Jan 1 14:56:01.301: INFO: Deleting pod "simpletest.rc-62s9n" in namespace "gc-4217" Jan 1 14:56:01.376: INFO: Deleting pod "simpletest.rc-68wxx" in namespace "gc-4217" Jan 1 14:56:01.460: INFO: Deleting pod "simpletest.rc-69b79" in namespace "gc-4217" 
Jan 1 14:56:01.485: INFO: Deleting pod "simpletest.rc-6j2jd" in namespace "gc-4217" Jan 1 14:56:01.517: INFO: Deleting pod "simpletest.rc-6l7mv" in namespace "gc-4217" Jan 1 14:56:01.623: INFO: Deleting pod "simpletest.rc-6l9wl" in namespace "gc-4217" Jan 1 14:56:01.675: INFO: Deleting pod "simpletest.rc-7c8zc" in namespace "gc-4217" Jan 1 14:56:01.721: INFO: Deleting pod "simpletest.rc-7gxnb" in namespace "gc-4217" Jan 1 14:56:01.924: INFO: Deleting pod "simpletest.rc-7sp95" in namespace "gc-4217" Jan 1 14:56:02.038: INFO: Deleting pod "simpletest.rc-8j9m4" in namespace "gc-4217" Jan 1 14:56:02.083: INFO: Deleting pod "simpletest.rc-8txjv" in namespace "gc-4217" Jan 1 14:56:02.179: INFO: Deleting pod "simpletest.rc-8wv7d" in namespace "gc-4217" Jan 1 14:56:02.330: INFO: Deleting pod "simpletest.rc-978sh" in namespace "gc-4217" Jan 1 14:56:02.369: INFO: Deleting pod "simpletest.rc-98p9j" in namespace "gc-4217" Jan 1 14:56:02.457: INFO: Deleting pod "simpletest.rc-9btt8" in namespace "gc-4217" Jan 1 14:56:02.526: INFO: Deleting pod "simpletest.rc-9gkr6" in namespace "gc-4217" Jan 1 14:56:02.805: INFO: Deleting pod "simpletest.rc-9lv59" in namespace "gc-4217" Jan 1 14:56:02.904: INFO: Deleting pod "simpletest.rc-9nc2l" in namespace "gc-4217" Jan 1 14:56:03.193: INFO: Deleting pod "simpletest.rc-9xcj7" in namespace "gc-4217" Jan 1 14:56:03.484: INFO: Deleting pod "simpletest.rc-9xkdc" in namespace "gc-4217" Jan 1 14:56:03.775: INFO: Deleting pod "simpletest.rc-b22wz" in namespace "gc-4217" Jan 1 14:56:03.994: INFO: Deleting pod "simpletest.rc-b54w9" in namespace "gc-4217" Jan 1 14:56:04.215: INFO: Deleting pod "simpletest.rc-b6npj" in namespace "gc-4217" Jan 1 14:56:04.486: INFO: Deleting pod "simpletest.rc-bmkrx" in namespace "gc-4217" Jan 1 14:56:04.705: INFO: Deleting pod "simpletest.rc-bvf76" in namespace "gc-4217" Jan 1 14:56:04.826: INFO: Deleting pod "simpletest.rc-cfflm" in namespace "gc-4217" Jan 1 14:56:04.981: INFO: Deleting pod "simpletest.rc-cfllp" in namespace "gc-4217" Jan 1 14:56:05.193: INFO: Deleting pod "simpletest.rc-cqwrf" in namespace "gc-4217" Jan 1 14:56:05.354: INFO: Deleting pod "simpletest.rc-ctdnk" in namespace "gc-4217" Jan 1 14:56:05.491: INFO: Deleting pod "simpletest.rc-df7gj" in namespace "gc-4217" Jan 1 14:56:05.543: INFO: Deleting pod "simpletest.rc-dqzjt" in namespace "gc-4217" Jan 1 14:56:05.670: INFO: Deleting pod "simpletest.rc-f2wxg" in namespace "gc-4217" Jan 1 14:56:05.859: INFO: Deleting pod "simpletest.rc-f9r5l" in namespace "gc-4217" Jan 1 14:56:06.027: INFO: Deleting pod "simpletest.rc-fbgpq" in namespace "gc-4217" Jan 1 14:56:06.129: INFO: Deleting pod "simpletest.rc-ffttd" in namespace "gc-4217" Jan 1 14:56:06.193: INFO: Deleting pod "simpletest.rc-ftnkf" in namespace "gc-4217" Jan 1 14:56:06.343: INFO: Deleting pod "simpletest.rc-h7rl4" in namespace "gc-4217" Jan 1 14:56:06.533: INFO: Deleting pod "simpletest.rc-h9776" in namespace "gc-4217" Jan 1 14:56:06.617: INFO: Deleting pod "simpletest.rc-hcmj5" in namespace "gc-4217" Jan 1 14:56:06.801: INFO: Deleting pod "simpletest.rc-hfcl8" in namespace "gc-4217" Jan 1 14:56:07.006: INFO: Deleting pod "simpletest.rc-hhdh6" in namespace "gc-4217" Jan 1 14:56:07.196: INFO: Deleting pod "simpletest.rc-j7nj7" in namespace "gc-4217" Jan 1 14:56:07.272: INFO: Deleting pod "simpletest.rc-jbhhb" in namespace "gc-4217" Jan 1 14:56:07.556: INFO: Deleting pod "simpletest.rc-k5kln" in namespace "gc-4217" Jan 1 14:56:07.747: INFO: Deleting pod "simpletest.rc-k6jkm" in namespace "gc-4217" Jan 1 14:56:07.874: INFO: 
Deleting pod "simpletest.rc-kmb76" in namespace "gc-4217" Jan 1 14:56:08.023: INFO: Deleting pod "simpletest.rc-kpnm9" in namespace "gc-4217" Jan 1 14:56:08.173: INFO: Deleting pod "simpletest.rc-kz6s9" in namespace "gc-4217" Jan 1 14:56:08.407: INFO: Deleting pod "simpletest.rc-kzn28" in namespace "gc-4217" Jan 1 14:56:08.482: INFO: Deleting pod "simpletest.rc-l4fdg" in namespace "gc-4217" Jan 1 14:56:08.586: INFO: Deleting pod "simpletest.rc-l8jg8" in namespace "gc-4217" Jan 1 14:56:08.645: INFO: Deleting pod "simpletest.rc-ls67k" in namespace "gc-4217" Jan 1 14:56:08.756: INFO: Deleting pod "simpletest.rc-lw6xn" in namespace "gc-4217" Jan 1 14:56:08.807: INFO: Deleting pod "simpletest.rc-lwdwh" in namespace "gc-4217" Jan 1 14:56:08.943: INFO: Deleting pod "simpletest.rc-lzp2n" in namespace "gc-4217" Jan 1 14:56:09.078: INFO: Deleting pod "simpletest.rc-mgq5x" in namespace "gc-4217" Jan 1 14:56:09.178: INFO: Deleting pod "simpletest.rc-mkw6r" in namespace "gc-4217" Jan 1 14:56:09.243: INFO: Deleting pod "simpletest.rc-n22fp" in namespace "gc-4217" Jan 1 14:56:09.292: INFO: Deleting pod "simpletest.rc-nqc4b" in namespace "gc-4217" Jan 1 14:56:09.378: INFO: Deleting pod "simpletest.rc-p6f8g" in namespace "gc-4217" Jan 1 14:56:09.457: INFO: Deleting pod "simpletest.rc-qrxjw" in namespace "gc-4217" Jan 1 14:56:09.533: INFO: Deleting pod "simpletest.rc-qsfc8" in namespace "gc-4217" Jan 1 14:56:09.638: INFO: Deleting pod "simpletest.rc-rhnmk" in namespace "gc-4217" Jan 1 14:56:09.704: INFO: Deleting pod "simpletest.rc-slj2l" in namespace "gc-4217" Jan 1 14:56:09.982: INFO: Deleting pod "simpletest.rc-sv5ld" in namespace "gc-4217" Jan 1 14:56:10.183: INFO: Deleting pod "simpletest.rc-t9d8g" in namespace "gc-4217" Jan 1 14:56:10.311: INFO: Deleting pod "simpletest.rc-tc668" in namespace "gc-4217" Jan 1 14:56:10.361: INFO: Deleting pod "simpletest.rc-tc6cf" in namespace "gc-4217" Jan 1 14:56:10.652: INFO: Deleting pod "simpletest.rc-tpbzx" in namespace "gc-4217" Jan 1 14:56:10.852: INFO: Deleting pod "simpletest.rc-tzxg6" in namespace "gc-4217" Jan 1 14:56:11.124: INFO: Deleting pod "simpletest.rc-w2krf" in namespace "gc-4217" Jan 1 14:56:11.239: INFO: Deleting pod "simpletest.rc-whqbg" in namespace "gc-4217" Jan 1 14:56:11.360: INFO: Deleting pod "simpletest.rc-wknnd" in namespace "gc-4217" Jan 1 14:56:11.394: INFO: Deleting pod "simpletest.rc-wpbrt" in namespace "gc-4217" Jan 1 14:56:11.522: INFO: Deleting pod "simpletest.rc-wt522" in namespace "gc-4217" Jan 1 14:56:11.697: INFO: Deleting pod "simpletest.rc-wvmwk" in namespace "gc-4217" Jan 1 14:56:11.909: INFO: Deleting pod "simpletest.rc-wwgzr" in namespace "gc-4217" Jan 1 14:56:12.150: INFO: Deleting pod "simpletest.rc-wzdw6" in namespace "gc-4217" Jan 1 14:56:12.247: INFO: Deleting pod "simpletest.rc-x57vr" in namespace "gc-4217" Jan 1 14:56:12.421: INFO: Deleting pod "simpletest.rc-x8wfd" in namespace "gc-4217" Jan 1 14:56:12.696: INFO: Deleting pod "simpletest.rc-x9wlx" in namespace "gc-4217" Jan 1 14:56:12.813: INFO: Deleting pod "simpletest.rc-xf2p9" in namespace "gc-4217" Jan 1 14:56:12.904: INFO: Deleting pod "simpletest.rc-xvlc5" in namespace "gc-4217" Jan 1 14:56:13.043: INFO: Deleting pod "simpletest.rc-xzjlj" in namespace "gc-4217" Jan 1 14:56:13.185: INFO: Deleting pod "simpletest.rc-z9zdh" in namespace "gc-4217" Jan 1 14:56:13.337: INFO: Deleting pod "simpletest.rc-zjb97" in namespace "gc-4217" Jan 1 14:56:13.477: INFO: Deleting pod "simpletest.rc-zmpcl" in namespace "gc-4217" Jan 1 14:56:13.674: INFO: Deleting pod 
"simpletest.rc-zsw9c" in namespace "gc-4217" Jan 1 14:56:13.718: INFO: Deleting pod "simpletest.rc-ztrnm" in namespace "gc-4217" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:13.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-4217" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":5,"skipped":188,"failed":0} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:56:07.863: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-aef51b49-7588-4bae-abbd-123a35cc0364 �[1mSTEP�[0m: Creating the pod �[1mSTEP�[0m: Waiting for pod with text data �[1mSTEP�[0m: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:56:14.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-3231" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":163,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:14.237: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:15.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9040" for this suite.
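
Aside (editorial sketch, not output from this run): the "field selection filtering on reportingController" step above goes through the events.k8s.io/v1 client. A small sketch, assuming that field selector is accepted by the cluster (as the spec's step suggests) and using an illustrative controller name:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listEventsByReportingController lists events.k8s.io/v1 Events filtered by the
// reportingController field; "e2e-test-controller" is an assumed value.
func listEventsByReportingController(ctx context.Context, client kubernetes.Interface, ns string) error {
	evs, err := client.EventsV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: "reportingController=e2e-test-controller",
	})
	if err != nil {
		return err
	}
	for _, ev := range evs.Items {
		fmt.Printf("%s/%s reason=%s note=%q\n", ev.Namespace, ev.Name, ev.Reason, ev.Note)
	}
	return nil
}
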
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":6,"skipped":189,"failed":0}
S
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:11.572: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should run through the lifecycle of Pods and PodStatus [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Pod with a static label
STEP: watching for Pod to be ready
Jan 1 14:56:12.169: INFO: observed Pod pod-test in namespace pods-7843 in phase Pending with labels: map[test-pod-static:true] & conditions []
Jan 1 14:56:12.207: INFO: observed Pod pod-test in namespace pods-7843 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC }]
Jan 1 14:56:12.368: INFO: observed Pod pod-test in namespace pods-7843 in phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC }]
Jan 1 14:56:15.840: INFO: Found Pod pod-test in namespace pods-7843 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-01 14:56:12 +0000 UTC }]
STEP: patching the Pod with a new Label and updated data
Jan 1 14:56:15.878: INFO: observed event type ADDED
STEP: getting the Pod and ensuring that it's patched
STEP: replacing the Pod's status Ready condition to False
STEP: check the Pod again to ensure its Ready conditions are False
STEP: deleting the Pod via a Collection with a LabelSelector
STEP: watching for the Pod to be deleted
Jan 1 14:56:15.928: INFO: observed event type ADDED
Jan 1 14:56:15.928: INFO: observed event type MODIFIED
Jan 1 14:56:15.928: INFO: observed event type MODIFIED
Jan 1 14:56:15.929: INFO: observed event type MODIFIED
Jan 1 14:56:15.929: INFO: observed event type MODIFIED
Jan 1 14:56:15.929: INFO: observed event type MODIFIED
Jan 1 14:56:15.929: INFO: observed event type MODIFIED
Jan 1 14:56:17.905: INFO: observed event type MODIFIED
Jan 1 14:56:18.877: INFO: observed event type MODIFIED
Jan 1 14:56:18.888: INFO: observed event type
MODIFIED
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:18.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7843" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":9,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:14.669: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 1 14:56:14.929: INFO: Waiting up to 5m0s for pod "pod-70e005e3-2a41-433d-9186-86b395f7c5f4" in namespace "emptydir-2315" to be "Succeeded or Failed"
Jan 1 14:56:14.959: INFO: Pod "pod-70e005e3-2a41-433d-9186-86b395f7c5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 29.321301ms
Jan 1 14:56:16.971: INFO: Pod "pod-70e005e3-2a41-433d-9186-86b395f7c5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041241859s
Jan 1 14:56:18.977: INFO: Pod "pod-70e005e3-2a41-433d-9186-86b395f7c5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047797189s
STEP: Saw pod success
Jan 1 14:56:18.977: INFO: Pod "pod-70e005e3-2a41-433d-9186-86b395f7c5f4" satisfied condition "Succeeded or Failed"
Jan 1 14:56:18.984: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-70e005e3-2a41-433d-9186-86b395f7c5f4 container test-container: <nil>
STEP: delete the pod
Jan 1 14:56:19.015: INFO: Waiting for pod pod-70e005e3-2a41-433d-9186-86b395f7c5f4 to disappear
Jan 1 14:56:19.028: INFO: Pod pod-70e005e3-2a41-433d-9186-86b395f7c5f4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:19.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2315" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:15.257: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 1 14:56:15.352: INFO: Waiting up to 5m0s for pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30" in namespace "emptydir-2944" to be "Succeeded or Failed"
Jan 1 14:56:15.359: INFO: Pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.655597ms
Jan 1 14:56:17.365: INFO: Pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30": Phase="Running", Reason="", readiness=true. Elapsed: 2.013263121s
Jan 1 14:56:19.371: INFO: Pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30": Phase="Running", Reason="", readiness=false. Elapsed: 4.01923889s
Jan 1 14:56:21.388: INFO: Pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.0359475s
STEP: Saw pod success
Jan 1 14:56:21.388: INFO: Pod "pod-492a4916-f56d-4950-972d-f67d3cc8fe30" satisfied condition "Succeeded or Failed"
Jan 1 14:56:21.396: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-zwqnic pod pod-492a4916-f56d-4950-972d-f67d3cc8fe30 container test-container: <nil>
STEP: delete the pod
Jan 1 14:56:21.446: INFO: Waiting for pod pod-492a4916-f56d-4950-972d-f67d3cc8fe30 to disappear
Jan 1 14:56:21.455: INFO: Pod pod-492a4916-f56d-4950-972d-f67d3cc8fe30 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:21.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2944" for this suite.
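
Aside (editorial sketch, not output from this run): the two emptyDir specs above differ only in the volume medium and the user the file is written as. A minimal sketch of the pod shape they drive; the names, image and shell command are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod builds a pod with a single emptyDir volume. Pass an empty medium
// ("") for the node-disk-backed (non-root,0666,default) variant, or
// corev1.StorageMediumMemory for the tmpfs-backed (root,0666,tmpfs) variant.
func emptyDirPod(name string, medium corev1.StorageMedium) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: medium},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "writer",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /scratch/f && chmod 0666 /scratch/f && ls -l /scratch/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
}
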
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":190,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:19.156: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jan 1 14:56:19.198: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jan 1 14:56:19.205: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 1 14:56:19.205: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jan 1 14:56:19.216: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 1 14:56:19.216: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jan 1 14:56:19.235: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Jan 1 14:56:19.235: INFO: Verifying
limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jan 1 14:56:26.311: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:26.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-5610" for this suite.
•
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":12,"skipped":212,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:21.541: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Jan 1 14:56:21.590: INFO: namespace kubectl-493
Jan 1 14:56:21.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-493 create -f -'
Jan 1 14:56:23.573: INFO: stderr: ""
Jan 1 14:56:23.573: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 1 14:56:24.588: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 1 14:56:24.588: INFO: Found 0 / 1
Jan 1 14:56:25.580: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 1 14:56:25.580: INFO: Found 0 / 1
Jan 1 14:56:26.583: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 1 14:56:26.583: INFO: Found 1 / 1
Jan 1 14:56:26.583: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 1 14:56:26.589: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 1 14:56:26.589: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 1 14:56:26.589: INFO: wait on agnhost-primary startup in kubectl-493
Jan 1 14:56:26.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-493 logs agnhost-primary-k2mjl agnhost-primary'
Jan 1 14:56:26.793: INFO: stderr: ""
Jan 1 14:56:26.793: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 1 14:56:26.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-493 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 1 14:56:27.011: INFO: stderr: ""
Jan 1 14:56:27.011: INFO: stdout: "service/rm2 exposed\n"
Jan 1 14:56:27.027: INFO: Service rm2 in namespace kubectl-493 found.
STEP: exposing service
Jan 1 14:56:29.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-493 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 1 14:56:29.153: INFO: stderr: ""
Jan 1 14:56:29.153: INFO: stdout: "service/rm3 exposed\n"
Jan 1 14:56:29.170: INFO: Service rm3 in namespace kubectl-493 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:31.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-493" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":8,"skipped":211,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:55:33.420: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 1 14:55:34.273: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 1 14:55:36.301: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 1 14:55:38.308: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 14, 55, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 1 14:55:41.401: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 14:55:41.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Jan 1 14:55:52.084: INFO: Waiting for webhook configuration to be ready...
Jan 1 14:56:02.253: INFO: Waiting for webhook configuration to be ready...
Jan 1 14:56:12.632: INFO: Waiting for webhook configuration to be ready...
Jan 1 14:56:22.728: INFO: Waiting for webhook configuration to be ready...
Jan 1 14:56:32.743: INFO: Waiting for webhook configuration to be ready...
Jan 1 14:56:32.743: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc0005a3b80, {0xc0047bd1e0, 0xc}, 0xc00383b720, 0xc0067f8580, 0xb87e7fff)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x7ea
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00024cb60, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:33.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2964" for this suite.
STEP: Destroying namespace "webhook-2964-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [59.929 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 1 14:56:32.743: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002462c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":1,"skipped":44,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:33.351: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 1 14:56:33.942: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 1 14:56:36.965: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 14:56:36.968: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:40.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5617" for this suite.
STEP: Destroying namespace "webhook-5617-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":2,"skipped":44,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:40.245: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:40.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3955" for this suite.
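
Aside (editorial sketch, not output from this run): the ServiceAccount lifecycle steps above (create, patch, find by label, delete) map onto a handful of client-go calls. A minimal sketch; the account name and label are illustrative assumptions.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// serviceAccountLifecycle creates a ServiceAccount, patches a label onto it,
// finds it again via that label, and deletes it.
func serviceAccountLifecycle(ctx context.Context, client kubernetes.Interface, ns string) error {
	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{Name: "e2e-sa-demo"}} // illustrative name
	if _, err := client.CoreV1().ServiceAccounts(ns).Create(ctx, sa, metav1.CreateOptions{}); err != nil {
		return err
	}
	patch := []byte(`{"metadata":{"labels":{"e2e":"patched"}}}`) // strategic merge patch adding a label
	if _, err := client.CoreV1().ServiceAccounts(ns).Patch(ctx, sa.Name, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}
	if _, err := client.CoreV1().ServiceAccounts(ns).List(ctx, metav1.ListOptions{LabelSelector: "e2e=patched"}); err != nil {
		return err
	}
	return client.CoreV1().ServiceAccounts(ns).Delete(ctx, sa.Name, metav1.DeleteOptions{})
}
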
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":3,"skipped":72,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:40.347: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-e94d00af-4d1d-4402-8e57-07660a0b5520
STEP: Creating a pod to test consume configMaps
Jan 1 14:56:40.387: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02" in namespace "projected-7988" to be "Succeeded or Failed"
Jan 1 14:56:40.391: INFO: Pod "pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059398ms
Jan 1 14:56:42.396: INFO: Pod "pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009087172s
Jan 1 14:56:44.402: INFO: Pod "pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014749699s
STEP: Saw pod success
Jan 1 14:56:44.402: INFO: Pod "pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02" satisfied condition "Succeeded or Failed"
Jan 1 14:56:44.405: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02 container agnhost-container: <nil>
STEP: delete the pod
Jan 1 14:56:44.418: INFO: Waiting for pod pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02 to disappear
Jan 1 14:56:44.420: INFO: Pod pod-projected-configmaps-39910789-b409-42ea-965c-bb3ee72d1c02 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:44.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7988" for this suite.
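
Aside (editorial sketch, not output from this run): the "defaultMode set" behaviour the projected-configMap spec above verifies is the DefaultMode field on a projected volume, which sets the permission bits on every projected file. A minimal sketch; the mode 0400, names and image are illustrative assumptions.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod mounts a ConfigMap through a projected volume with an
// explicit defaultMode, then lists the files so the mode can be inspected.
func projectedConfigMapPod() *corev1.Pod {
	mode := int32(0400) // applied to every projected file unless overridden per item
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-defaultmode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "proj",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "checker",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/proj"},
				VolumeMounts: []corev1.VolumeMount{{Name: "proj", MountPath: "/etc/proj"}},
			}},
		},
	}
}
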
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":72,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:44.435: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Jan 1 14:56:44.468: INFO: created test-podtemplate-1
Jan 1 14:56:44.472: INFO: created test-podtemplate-2
Jan 1 14:56:44.477: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 1 14:56:44.480: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 1 14:56:44.490: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:44.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4361" for this suite.
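
Aside (editorial sketch, not output from this run): the "requesting DeleteCollection of pod templates" step above is a single API call that removes every object matching a label selector; the same pattern is what the ReplicaSet collection deletion a little further down uses. A minimal sketch; the label selector value is an illustrative assumption.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePodTemplates removes all PodTemplates in the namespace that carry the
// given label, in one DeleteCollection request.
func deletePodTemplates(ctx context.Context, client kubernetes.Interface, ns string) error {
	return client.CoreV1().PodTemplates(ns).DeleteCollection(ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "podtemplate-set=e2e-demo"}, // illustrative label
	)
}
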
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":5,"skipped":73,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:44.588: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should list and delete a collection of ReplicaSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create a ReplicaSet
STEP: Verify that the required pods have come up
Jan 1 14:56:44.618: INFO: Pod name sample-pod: Found 0 pods out of 3
Jan 1 14:56:49.624: INFO: Pod name sample-pod: Found 3 pods out of 3
STEP: ensuring each pod is running
Jan 1 14:56:49.627: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]}
STEP: Listing all ReplicaSets
STEP: DeleteCollection of the ReplicaSets
STEP: After DeleteCollection verify that ReplicaSets have been deleted
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:56:49.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2521" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":6,"skipped":135,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:49.664: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-xxzh
STEP: Creating a pod to test atomic-volume-subpath
Jan 1 14:56:49.717: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xxzh" in namespace "subpath-4322" to be "Succeeded or Failed"
Jan 1 14:56:49.721: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Pending", Reason="", readiness=false. Elapsed: 3.23224ms
Jan 1 14:56:51.725: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 2.007922774s
Jan 1 14:56:53.729: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 4.011875228s
Jan 1 14:56:55.735: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 6.017806197s
Jan 1 14:56:57.740: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 8.022806877s
Jan 1 14:56:59.747: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 10.029604961s
Jan 1 14:57:01.757: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 12.039112603s
Jan 1 14:57:03.761: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 14.043668127s
Jan 1 14:57:05.766: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 16.048730564s
Jan 1 14:57:07.771: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 18.053272501s
Jan 1 14:57:09.777: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=true. Elapsed: 20.059744264s
Jan 1 14:57:11.782: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Running", Reason="", readiness=false. Elapsed: 22.064933733s
Jan 1 14:57:13.787: INFO: Pod "pod-subpath-test-configmap-xxzh": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 24.069711328s
STEP: Saw pod success
Jan 1 14:57:13.787: INFO: Pod "pod-subpath-test-configmap-xxzh" satisfied condition "Succeeded or Failed"
Jan 1 14:57:13.791: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-subpath-test-configmap-xxzh container test-container-subpath-configmap-xxzh: <nil>
STEP: delete the pod
Jan 1 14:57:13.811: INFO: Waiting for pod pod-subpath-test-configmap-xxzh to disappear
Jan 1 14:57:13.814: INFO: Pod pod-subpath-test-configmap-xxzh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xxzh
Jan 1 14:57:13.814: INFO: Deleting pod "pod-subpath-test-configmap-xxzh" in namespace "subpath-4322"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:57:13.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4322" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":7,"skipped":138,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:57:13.830: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-f478b015-2288-406b-81d5-1d9a07361d58
STEP: Creating a pod to test consume secrets
Jan 1 14:57:13.879: INFO: Waiting up to 5m0s for pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7" in namespace "secrets-1403" to be "Succeeded or Failed"
Jan 1 14:57:13.885: INFO: Pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.599805ms
Jan 1 14:57:15.892: INFO: Pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01321017s
Jan 1 14:57:17.897: INFO: Pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018305561s
Jan 1 14:57:19.902: INFO: Pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.022912717s
STEP: Saw pod success
Jan 1 14:57:19.902: INFO: Pod "pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7" satisfied condition "Succeeded or Failed"
Jan 1 14:57:19.905: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7 container secret-env-test: <nil>
STEP: delete the pod
Jan 1 14:57:19.924: INFO: Waiting for pod pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7 to disappear
Jan 1 14:57:19.928: INFO: Pod pod-secrets-89706e8e-9dde-47ac-bc81-85ebc3aca2b7 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:57:19.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1403" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":139,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:57:20.012: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 14:57:20.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7833 version'
Jan 1 14:57:20.139: INFO: stderr: ""
Jan 1 14:57:20.139: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.15\", GitCommit:\"b84cb8ab29366daa1bba65bc67f54de2f6c34848\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:49:13Z\", GoVersion:\"go1.17.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.15\", GitCommit:\"d34db33f\", GitTreeState:\"clean\", BuildDate:\"2022-12-20T18:31:45Z\", GoVersion:\"go1.17.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:57:20.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7833" for this suite.
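
Aside (editorial sketch, not output from this run): the server half of the version.Info that `kubectl version` printed above is also available programmatically through the discovery client, which is sometimes handier in test tooling than shelling out:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
)

// printServerVersion fetches the API server's version.Info via discovery.
func printServerVersion(client kubernetes.Interface) error {
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("server: %s (%s, %s)\n", info.GitVersion, info.GoVersion, info.Platform)
	return nil
}
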
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":9,"skipped":166,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:56:26.372: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 1 14:56:27.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 1 14:56:30.316: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating pod webhook via the AdmissionRegistration API Jan 1 14:56:40.345: INFO: Waiting for webhook configuration to be ready... Jan 1 14:56:50.454: INFO: Waiting for webhook configuration to be ready... Jan 1 14:57:00.566: INFO: Waiting for webhook configuration to be ready... Jan 1 14:57:10.663: INFO: Waiting for webhook configuration to be ready... Jan 1 14:57:20.673: INFO: Waiting for webhook configuration to be ready... 
Jan 1 14:57:20.674: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc00007c210>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForPod(0xc000272c60, {0xc002298d30, 0xc}, 0xc003183220, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 +0x745 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:262 +0x45 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000bba340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:20.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6234" for this suite. STEP: Destroying namespace "webhook-6234-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • Failure [54.368 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:57:20.674: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc00007c210>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 ------------------------------ [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:56:18.974: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157 [It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating server pod server in namespace prestop-2955 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-2955 STEP: Deleting pre-stop pod Jan 1 14:56:28.075: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:33.074: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:38.074: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:43.073: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:48.074: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:53.073: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:56:58.076: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:03.074: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:08.075: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:13.075: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:18.075: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } Jan 1 14:57:23.075: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:23.084: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 1 14:57:23.085: FAIL: validating pre-stop. 
Unexpected error: <*errors.errorString | 0xc0002c82c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/node.testPreStop({0x7b06bd0, 0xc00292a300}, {0xc00326f9b0, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151 +0x1125 k8s.io/kubernetes/test/e2e/node.glob..func11.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +0x31 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0002bed00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a STEP: Deleting the server pod [AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:23.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-2955" for this suite. • Failure [64.138 seconds] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23 should call prestop when killing a pod [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:57:23.085: validating pre-stop. Unexpected error: <*errors.errorString | 0xc0002c82c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151 ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:20.190: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test substitution in container's command Jan 1 14:57:20.233: INFO: Waiting up to 5m0s for pod "var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12" in namespace "var-expansion-6653" to be "Succeeded or Failed" Jan 1 14:57:20.239: INFO: Pod "var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12": Phase="Pending", Reason="", readiness=false. Elapsed: 5.960732ms Jan 1 14:57:22.243: INFO: Pod "var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010766598s Jan 1 14:57:24.250: INFO: Pod "var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017826608s STEP: Saw pod success Jan 1 14:57:24.250: INFO: Pod "var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12" satisfied condition "Succeeded or Failed" Jan 1 14:57:24.254: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12 container dapi-container: <nil> STEP: delete the pod Jan 1 14:57:24.273: INFO: Waiting for pod var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12 to disappear Jan 1 14:57:24.277: INFO: Pod var-expansion-65cbb7c4-779c-4d3d-b1e0-9575e7eb5e12 no longer exists [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:24.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6653" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":182,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:24.294: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:24.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-602" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":183,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:24.443: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:24.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-7016" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":12,"skipped":220,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:24.692: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating Agnhost RC Jan 1 14:57:24.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9219 create -f -' Jan 1 14:57:26.171: INFO: stderr: "" Jan 1 14:57:26.171: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Jan 1 14:57:27.177: INFO: Selector matched 1 pods for map[app:agnhost] Jan 1 14:57:27.177: INFO: Found 1 / 1 Jan 1 14:57:27.177: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jan 1 14:57:27.182: INFO: Selector matched 1 pods for map[app:agnhost] Jan 1 14:57:27.182: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 1 14:57:27.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9219 patch pod agnhost-primary-59frg -p {"metadata":{"annotations":{"x":"y"}}}' Jan 1 14:57:27.283: INFO: stderr: "" Jan 1 14:57:27.283: INFO: stdout: "pod/agnhost-primary-59frg patched\n" STEP: checking annotations Jan 1 14:57:27.288: INFO: Selector matched 1 pods for map[app:agnhost] Jan 1 14:57:27.288: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:27.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9219" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":13,"skipped":256,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:27.329: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:57:27.369: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: creating the pod STEP: submitting the pod to kubernetes Jan 1 14:57:27.390: INFO: The status of Pod pod-logs-websocket-70bdc561-40f0-4320-8f0b-65c906024cb9 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:57:29.396: INFO: The status of Pod pod-logs-websocket-70bdc561-40f0-4320-8f0b-65c906024cb9 is Running (Ready = true) [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:29.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8679" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":263,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:29.509: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support --unix-socket=/path [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Starting the proxy Jan 1 14:57:29.546: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4262 proxy --unix-socket=/tmp/kubectl-proxy-unix1163116796/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:57:29.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4262" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":-1,"completed":15,"skipped":287,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":12,"skipped":223,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:57:20.743: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 1 14:57:21.546: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 1 14:57:24.576: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating pod webhook via the AdmissionRegistration API Jan 1 14:57:34.602: INFO: Waiting for webhook configuration to be ready... Jan 1 14:57:44.720: INFO: Waiting for webhook configuration to be ready... Jan 1 14:57:54.819: INFO: Waiting for webhook configuration to be ready... Jan 1 14:58:04.914: INFO: Waiting for webhook configuration to be ready... Jan 1 14:58:14.924: INFO: Waiting for webhook configuration to be ready... 
Jan 1 14:58:14.925: FAIL: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc00007c210>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForPod(0xc000272c60, {0xc003689270, 0xb}, 0xc003d878b0, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 +0x745 k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.9() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:262 +0x45 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000bba340, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:14.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-959" for this suite. STEP: Destroying namespace "webhook-959-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • Failure [54.263 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:58:14.925: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc00007c210>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 ------------------------------ {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":12,"skipped":223,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:15.008: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 1 14:58:15.542: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 1 14:58:18.564: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:18.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2635" for this suite. STEP: Destroying namespace "webhook-2635-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":13,"skipped":223,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:18.687: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename endpointslice STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49 [It] should support creating EndpointSlice API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/discovery.k8s.io STEP: getting /apis/discovery.k8s.iov1 STEP: creating STEP: getting STEP: listing STEP: watching Jan 1 14:58:18.741: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Jan 1 14:58:18.746: INFO: starting watch STEP: patching STEP: updating Jan 1 14:58:18.759: INFO: waiting for watch events with expected annotations Jan 1 14:58:18.759: INFO: saw patched and updated annotations STEP: deleting STEP: deleting 
a collection [AfterEach] [sig-network] EndpointSlice /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:18.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "endpointslice-1637" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":14,"skipped":224,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:18.805: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Jan 1 14:58:18.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9" in namespace "downward-api-2098" to be "Succeeded or Failed" Jan 1 14:58:18.839: INFO: Pod "downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.728677ms Jan 1 14:58:20.843: INFO: Pod "downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006847622s Jan 1 14:58:22.848: INFO: Pod "downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011088901s STEP: Saw pod success Jan 1 14:58:22.848: INFO: Pod "downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9" satisfied condition "Succeeded or Failed" Jan 1 14:58:22.850: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9 container client-container: <nil> STEP: delete the pod Jan 1 14:58:22.865: INFO: Waiting for pod downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9 to disappear Jan 1 14:58:22.867: INFO: Pod downwardapi-volume-1bd0ed76-90d7-4483-8e40-c4a7bd0921f9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:22.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2098" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":235,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:22.879: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should test the lifecycle of an Endpoint [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating an Endpoint STEP: waiting for available Endpoint STEP: listing all Endpoints STEP: updating the Endpoint STEP: fetching the Endpoint STEP: patching the Endpoint STEP: fetching the Endpoint STEP: deleting the Endpoint by Collection STEP: waiting for Endpoint deletion STEP: fetching the Endpoint [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:22.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8349" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":16,"skipped":237,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:22.994: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should complete a service status lifecycle [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Service STEP: watching for the Service to be added Jan 1 14:58:23.029: INFO: Found Service test-service-mk964 in namespace services-9365 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}] Jan 1 14:58:23.029: INFO: Service test-service-mk964 created STEP: Getting /status Jan 1 14:58:23.037: INFO: Service test-service-mk964 has LoadBalancer: {[]} STEP: patching the ServiceStatus STEP: watching for the Service to be patched Jan 1 14:58:23.050: INFO: observed Service test-service-mk964 in namespace services-9365 with annotations: map[] & LoadBalancer: {[]} Jan 1 14:58:23.050: INFO: Found Service test-service-mk964 in namespace services-9365 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]} Jan 1 14:58:23.050: INFO: Service test-service-mk964 has service status patched STEP: updating the ServiceStatus Jan 1 14:58:23.065: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Service to be updated Jan 1 14:58:23.068: INFO: Observed Service test-service-mk964 in namespace services-9365 with annotations: map[] & Conditions: {[]} Jan 1 14:58:23.068: INFO: Observed event: &Service{ObjectMeta:{test-service-mk964 services-9365 f63cf912-5ce5-45a2-bb03-ae390557b274 6289 0 2023-01-01 14:58:23 +0000 UTC <nil> <nil> map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2023-01-01 14:58:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2023-01-01 14:58:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.136.153.227,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.136.153.227],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},} Jan 1 14:58:23.068: INFO: Found Service test-service-mk964 in namespace services-9365 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Jan 1 14:58:23.068: INFO: Service test-service-mk964 has service status updated �[1mSTEP�[0m: patching the service �[1mSTEP�[0m: watching for the Service to be patched Jan 1 14:58:23.086: INFO: observed Service test-service-mk964 in namespace services-9365 with labels: map[test-service-static:true] Jan 1 14:58:23.087: INFO: observed Service test-service-mk964 in namespace services-9365 with labels: map[test-service-static:true] Jan 1 14:58:23.087: INFO: observed Service test-service-mk964 in namespace services-9365 with labels: map[test-service-static:true] Jan 1 14:58:23.087: INFO: Found Service test-service-mk964 in namespace services-9365 with labels: map[test-service:patched test-service-static:true] Jan 1 14:58:23.087: INFO: Service test-service-mk964 patched �[1mSTEP�[0m: deleting the service �[1mSTEP�[0m: watching for the Service to be deleted Jan 1 14:58:23.108: INFO: Observed event: ADDED Jan 1 14:58:23.108: INFO: Observed event: MODIFIED Jan 1 14:58:23.109: INFO: Observed event: MODIFIED Jan 1 14:58:23.109: INFO: Observed event: MODIFIED Jan 1 14:58:23.109: INFO: Found Service test-service-mk964 in namespace services-9365 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true] Jan 1 14:58:23.109: INFO: Service test-service-mk964 deleted [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:23.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-9365" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • ------------------------------ {"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":17,"skipped":269,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 1 14:58:23.131: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should patch a secret [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:23.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6593" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":18,"skipped":273,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":9,"skipped":170,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
[BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:57:23.115: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-7292
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7292
STEP: Deleting pre-stop pod
Jan 1 14:57:32.218: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
[identical polls repeated every 5 seconds from Jan 1 14:57:37.218 through 14:58:27.217, plus a final read at 14:58:27.220; every response kept "Sent", "Received" and "Errors" null and "StillContactingPeers" true, and each poll only accumulated one more copy of the same "default/nettest has 0 endpoints" log message]
Jan 1 14:58:27.220: FAIL: validating pre-stop.
Unexpected error:
    <*errors.errorString | 0xc0002c82c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/node.testPreStop({0x7b06bd0, 0xc002f7a000}, {0xc003a72120, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151 +0x1125
k8s.io/kubernetes/test/e2e/node.glob..func11.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +0x31
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0002bed00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:27.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7292" for this suite.
• Failure [64.127 seconds]
[sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
  Jan 1 14:58:27.220: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc0002c82c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
[BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:23.254: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-3539/configmap-test-52aaddd8-7a0b-45d2-8c88-bb655a2f9951
STEP: Creating a pod to test consume configMaps
Jan 1 14:58:23.280: INFO: Waiting up to 5m0s for pod "pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658" in namespace "configmap-3539" to be "Succeeded or Failed"
Jan 1 14:58:23.283: INFO: Pod "pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025149ms
Jan 1 14:58:25.288: INFO: Pod "pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007227126s
Jan 1 14:58:27.291: INFO: Pod "pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010821155s
STEP: Saw pod success
Jan 1 14:58:27.291: INFO: Pod "pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658" satisfied condition "Succeeded or Failed"
Jan 1 14:58:27.295: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658 container env-test: <nil>
STEP: delete the pod
Jan 1 14:58:27.310: INFO: Waiting for pod pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658 to disappear
Jan 1 14:58:27.313: INFO: Pod pod-configmaps-6a1acda2-6a4b-4c0b-8e5b-f181af567658 no longer exists
[AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:27.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3539" for this suite.
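One way to iterate on the PreStop timeout recorded above without rerunning the whole conformance suite is to focus the upstream e2e.test binary on just that spec (a sketch; it assumes an e2e.test binary built for the cluster's Kubernetes version is available locally and that /tmp/kubeconfig still points at the workload cluster):

    ./e2e.test --provider=skeleton --kubeconfig=/tmp/kubeconfig \
      --ginkgo.focus='\[sig-node\] PreStop should call prestop when killing a pod'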
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":314,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:27.328: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Request ServerVersion
STEP: Confirm major version
Jan 1 14:58:27.349: INFO: Major version: 1
STEP: Confirm minor version
Jan 1 14:58:27.349: INFO: cleanMinorVersion: 23
Jan 1 14:58:27.349: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:27.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-1378" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":20,"skipped":316,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:27.360: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 1 14:58:27.398: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8" in namespace "projected-1695" to be "Succeeded or Failed"
Jan 1 14:58:27.403: INFO: Pod "downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.988593ms
Jan 1 14:58:29.406: INFO: Pod "downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008588032s
Jan 1 14:58:31.411: INFO: Pod "downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01292642s
STEP: Saw pod success
Jan 1 14:58:31.411: INFO: Pod "downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8" satisfied condition "Succeeded or Failed"
Jan 1 14:58:31.417: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8 container client-container: <nil>
STEP: delete the pod
Jan 1 14:58:31.429: INFO: Waiting for pod downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8 to disappear
Jan 1 14:58:31.431: INFO: Pod downwardapi-volume-4036eb49-c722-4b52-9762-a3a74c1151f8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:31.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1695" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":316,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:31.449: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-7ce4f942-8dd6-4542-ac05-31b1054e83b3
STEP: Creating a pod to test consume secrets
Jan 1 14:58:31.484: INFO: Waiting up to 5m0s for pod "pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d" in namespace "secrets-2312" to be "Succeeded or Failed"
Jan 1 14:58:31.487: INFO: Pod "pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.563551ms
Jan 1 14:58:33.493: INFO: Pod "pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d": Phase="Running", Reason="", readiness=false. Elapsed: 2.008829573s
Jan 1 14:58:35.497: INFO: Pod "pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013686378s
STEP: Saw pod success
Jan 1 14:58:35.498: INFO: Pod "pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d" satisfied condition "Succeeded or Failed"
Jan 1 14:58:35.501: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d container secret-volume-test: <nil>
STEP: delete the pod
Jan 1 14:58:35.515: INFO: Waiting for pod pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d to disappear
Jan 1 14:58:35.517: INFO: Pod pod-secrets-18c46e70-2a0f-4ff5-8591-5b5ac1a5e21d no longer exists
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:35.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2312" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":322,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":9,"skipped":170,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
[BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:27.244: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-923
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-923
STEP: Deleting pre-stop pod
Jan 1 14:58:36.307: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:36.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-923" for this suite.
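The retry above passes once the server pod reports "prestop": 1, meaning the tester pod's preStop hook fired before the timeout this time. A hypothetical spot check while the namespace still exists would be to dump the tester pod's lifecycle stanza (pod and field names here assume the shape the spec output suggests):

    kubectl --kubeconfig=/tmp/kubeconfig -n prestop-923 get pod tester \
      -o jsonpath='{.spec.containers[*].lifecycle.preStop}'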
•
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":10,"skipped":170,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:35.536: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 1 14:58:35.561: INFO: Waiting up to 5m0s for pod "pod-bc50804a-1654-4a26-8cc2-3aa989a9e686" in namespace "emptydir-4748" to be "Succeeded or Failed"
Jan 1 14:58:35.564: INFO: Pod "pod-bc50804a-1654-4a26-8cc2-3aa989a9e686": Phase="Pending", Reason="", readiness=false. Elapsed: 3.535825ms
Jan 1 14:58:37.569: INFO: Pod "pod-bc50804a-1654-4a26-8cc2-3aa989a9e686": Phase="Running", Reason="", readiness=false. Elapsed: 2.00779504s
Jan 1 14:58:39.572: INFO: Pod "pod-bc50804a-1654-4a26-8cc2-3aa989a9e686": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011546958s
STEP: Saw pod success
Jan 1 14:58:39.572: INFO: Pod "pod-bc50804a-1654-4a26-8cc2-3aa989a9e686" satisfied condition "Succeeded or Failed"
Jan 1 14:58:39.575: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-bc50804a-1654-4a26-8cc2-3aa989a9e686 container test-container: <nil>
STEP: delete the pod
Jan 1 14:58:39.589: INFO: Waiting for pod pod-bc50804a-1654-4a26-8cc2-3aa989a9e686 to disappear
Jan 1 14:58:39.591: INFO: Pod pod-bc50804a-1654-4a26-8cc2-3aa989a9e686 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:39.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4748" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":330,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:56:31.202: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service multi-endpoint-test in namespace services-3912
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3912 to expose endpoints map[]
Jan 1 14:56:31.249: INFO: successfully validated that service multi-endpoint-test in namespace services-3912 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-3912
Jan 1 14:56:31.264: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:56:33.267: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3912 to expose endpoints map[pod1:[100]]
Jan 1 14:56:33.312: INFO: successfully validated that service multi-endpoint-test in namespace services-3912 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-3912
Jan 1 14:56:33.324: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 14:56:35.328: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3912 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 1 14:56:35.344: INFO: successfully validated that service multi-endpoint-test in namespace services-3912 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jan 1 14:56:35.344: INFO: Creating new exec pod
Jan 1 14:56:38.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3912 exec execpodvztzg -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Jan 1 14:56:40.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Jan 1 14:56:40.549: INFO: stdout: ""
[the same kubectl exec probe was re-run roughly once per second from Jan 1 14:56:41.550 through 14:58:40.716; every attempt's stderr contained "Connection to multi-endpoint-test 80 port [tcp/http] succeeded!" while stdout stayed empty, i.e. the TCP connect succeeded but the echoed hostname never came back]
Jan 1 14:58:42.881: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Jan 1 14:58:42.881: INFO: stdout: ""
Jan 1 14:58:42.881: FAIL: Unexpected error:
    <*errors.errorString | 0xc003a88040>: {
        s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0006036c0, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:42.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3912" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [131.783 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should serve multiport endpoints from pods [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
  Jan 1 14:58:42.881: Unexpected error:
      <*errors.errorString | 0xc003a88040>: {
          s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol
  occurred
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 14:58:39.757: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 14:58:43.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-8232" for this suite.
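A manual repeat of the probe that kept timing out in the multiport Services spec above, using the same shell pipeline the test logs show (sketch only: the probe pod name, image, and tag are assumptions, and services-3912 is destroyed when the spec ends, so this is only meaningful while that spec is still running):

    kubectl --kubeconfig=/tmp/kubeconfig -n services-3912 run nc-probe --rm -i --restart=Never \
      --image=registry.k8s.io/e2e-test-images/agnhost:2.39 --command -- \
      /bin/sh -c 'echo hostName | nc -v -t -w 2 multi-endpoint-test 80'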
�[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:58:36.335: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-325 �[1mSTEP�[0m: changing the ExternalName service to type=ClusterIP �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-325 I0101 14:58:36.387429 16 runners.go:193] Created replication controller with name: externalname-service, namespace: services-325, replica count: 2 I0101 14:58:39.440245 16 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 14:58:39.440: INFO: Creating new exec pod Jan 1 14:58:42.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-325 exec execpodwrlrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Jan 1 14:58:42.613: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Jan 1 14:58:42.613: INFO: stdout: "externalname-service-pbl5l" Jan 1 14:58:42.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-325 exec execpodwrlrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.139.234 80' Jan 1 14:58:42.772: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.139.234 80\nConnection to 10.143.139.234 80 port [tcp/http] succeeded!\n" Jan 1 14:58:42.772: INFO: stdout: "" Jan 1 14:58:43.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-325 exec execpodwrlrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.139.234 80' Jan 1 14:58:43.999: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 10.143.139.234 80\nConnection to 10.143.139.234 80 port [tcp/http] succeeded!\n" Jan 1 14:58:43.999: INFO: stdout: "" Jan 1 14:58:44.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-325 exec execpodwrlrj -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.139.234 80' Jan 1 14:58:45.121: INFO: stderr: "+ nc -v -t -w 2 10.143.139.234 80\nConnection to 10.143.139.234 80 port [tcp/http] succeeded!\n+ echo hostName\n" Jan 1 14:58:45.122: INFO: stdout: "externalname-service-mqzh2" Jan 1 14:58:45.122: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:58:45.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-325" for this suite. 
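For contrast, the ExternalName-to-ClusterIP spec above passes as soon as a backend replies with its pod name. The same nc one-liner is also a quick way to spot-check that a ClusterIP load-balances across its backends; a sketch reusing the ClusterIP and exec pod from this run:

# Probe the ClusterIP a few times; different backend pod names on stdout
# (externalname-service-pbl5l / externalname-service-mqzh2 here) show that
# kube-proxy is spreading connections across both replicas.
for i in 1 2 3 4 5; do
  kubectl --kubeconfig=/tmp/kubeconfig -n services-325 exec execpodwrlrj -- \
    /bin/sh -c 'echo hostName | nc -v -t -w 2 10.143.139.234 80'
done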
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":11,"skipped":175,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:58:45.297: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 1 14:58:45.483: INFO: Waiting up to 5m0s for pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda" in namespace "downward-api-4539" to be "Succeeded or Failed" Jan 1 14:58:45.499: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 16.086815ms Jan 1 14:58:47.526: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042439848s Jan 1 14:58:49.559: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075524549s Jan 1 14:58:51.592: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.109241292s Jan 1 14:58:53.602: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.118768585s Jan 1 14:58:55.609: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 10.125946247s Jan 1 14:58:57.618: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 12.134665677s Jan 1 14:58:59.623: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Pending", Reason="", readiness=false. Elapsed: 14.139540172s Jan 1 14:59:01.654: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 16.170975569s �[1mSTEP�[0m: Saw pod success Jan 1 14:59:01.654: INFO: Pod "downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda" satisfied condition "Succeeded or Failed" Jan 1 14:59:01.667: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 14:59:01.879: INFO: Waiting for pod downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda to disappear Jan 1 14:59:01.891: INFO: Pod downward-api-3c6ffbb2-ff9a-42cb-9b76-d215cfcfabda no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:01.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-4539" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":186,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":24,"skipped":450,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:58:43.813: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc1 �[1mSTEP�[0m: create the rc2 �[1mSTEP�[0m: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well �[1mSTEP�[0m: delete the rc simpletest-rc-to-be-deleted �[1mSTEP�[0m: wait for the rc to be deleted Jan 1 14:58:55.460: INFO: 69 pods remaining Jan 1 14:58:55.460: INFO: 69 pods has nil DeletionTimestamp Jan 1 14:58:55.460: INFO: �[1mSTEP�[0m: Gathering metrics Jan 1 14:59:00.462: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-upqhfa-bk7tk-vbnvt is Running (Ready = true) Jan 1 14:59:00.524: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: 
For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Jan 1 14:59:00.524: INFO: Deleting pod "simpletest-rc-to-be-deleted-258dv" in namespace "gc-893" Jan 1 14:59:00.534: INFO: Deleting pod "simpletest-rc-to-be-deleted-2jmqb" in namespace "gc-893" Jan 1 14:59:00.545: INFO: Deleting pod "simpletest-rc-to-be-deleted-2lsxc" in namespace "gc-893" Jan 1 14:59:00.554: INFO: Deleting pod "simpletest-rc-to-be-deleted-2ngq4" in namespace "gc-893" Jan 1 14:59:00.561: INFO: Deleting pod "simpletest-rc-to-be-deleted-2rzb5" in namespace "gc-893" Jan 1 14:59:00.570: INFO: Deleting pod "simpletest-rc-to-be-deleted-425qk" in namespace "gc-893" Jan 1 14:59:00.588: INFO: Deleting pod "simpletest-rc-to-be-deleted-47l9h" in namespace "gc-893" Jan 1 14:59:00.596: INFO: Deleting pod "simpletest-rc-to-be-deleted-49slf" in namespace "gc-893" Jan 1 14:59:00.608: INFO: Deleting pod "simpletest-rc-to-be-deleted-4hzj8" in namespace "gc-893" Jan 1 14:59:00.626: INFO: Deleting pod "simpletest-rc-to-be-deleted-4p9w6" in namespace "gc-893" Jan 1 14:59:00.642: INFO: Deleting pod "simpletest-rc-to-be-deleted-4sd8n" in namespace "gc-893" Jan 1 14:59:00.654: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tbsn" in namespace "gc-893" Jan 1 14:59:00.672: INFO: Deleting pod "simpletest-rc-to-be-deleted-545f7" in namespace "gc-893" Jan 1 14:59:00.704: INFO: Deleting pod "simpletest-rc-to-be-deleted-5bj8v" in namespace "gc-893" Jan 1 14:59:00.749: INFO: Deleting pod "simpletest-rc-to-be-deleted-5ht8r" in namespace "gc-893" Jan 1 14:59:00.777: INFO: Deleting pod "simpletest-rc-to-be-deleted-5n6km" in namespace "gc-893" Jan 1 14:59:00.789: INFO: Deleting pod "simpletest-rc-to-be-deleted-5xn9n" in namespace "gc-893" Jan 1 14:59:00.822: INFO: Deleting pod "simpletest-rc-to-be-deleted-69qc2" in namespace "gc-893" Jan 1 14:59:00.845: INFO: Deleting pod "simpletest-rc-to-be-deleted-7gn8g" in namespace "gc-893" Jan 1 14:59:00.882: INFO: Deleting pod "simpletest-rc-to-be-deleted-7pm2n" in namespace "gc-893" Jan 1 14:59:00.922: INFO: Deleting pod "simpletest-rc-to-be-deleted-7t65b" in namespace "gc-893" Jan 1 14:59:00.955: INFO: Deleting pod "simpletest-rc-to-be-deleted-7w9ts" in namespace "gc-893" Jan 1 14:59:01.008: INFO: Deleting pod "simpletest-rc-to-be-deleted-8894m" in namespace "gc-893" Jan 1 14:59:01.032: INFO: Deleting pod "simpletest-rc-to-be-deleted-8cstv" in namespace "gc-893" Jan 1 14:59:01.055: INFO: Deleting pod "simpletest-rc-to-be-deleted-8g4rk" in namespace "gc-893" Jan 1 14:59:01.073: INFO: Deleting pod "simpletest-rc-to-be-deleted-8l9j6" in namespace "gc-893" Jan 1 14:59:01.133: INFO: Deleting pod "simpletest-rc-to-be-deleted-8shcv" in namespace "gc-893" Jan 1 14:59:01.155: INFO: Deleting pod "simpletest-rc-to-be-deleted-8t9ps" in namespace "gc-893" Jan 1 14:59:01.176: INFO: Deleting pod "simpletest-rc-to-be-deleted-97ffm" in namespace "gc-893" Jan 1 14:59:01.204: INFO: Deleting pod "simpletest-rc-to-be-deleted-9llxw" in namespace "gc-893" Jan 1 14:59:01.224: INFO: Deleting pod 
"simpletest-rc-to-be-deleted-9xwtr" in namespace "gc-893" Jan 1 14:59:01.263: INFO: Deleting pod "simpletest-rc-to-be-deleted-b2ncc" in namespace "gc-893" Jan 1 14:59:01.308: INFO: Deleting pod "simpletest-rc-to-be-deleted-bpv4m" in namespace "gc-893" Jan 1 14:59:01.337: INFO: Deleting pod "simpletest-rc-to-be-deleted-cfzx7" in namespace "gc-893" Jan 1 14:59:01.429: INFO: Deleting pod "simpletest-rc-to-be-deleted-ch872" in namespace "gc-893" Jan 1 14:59:01.453: INFO: Deleting pod "simpletest-rc-to-be-deleted-cjkkc" in namespace "gc-893" Jan 1 14:59:01.464: INFO: Deleting pod "simpletest-rc-to-be-deleted-cqlgl" in namespace "gc-893" Jan 1 14:59:01.488: INFO: Deleting pod "simpletest-rc-to-be-deleted-csxtw" in namespace "gc-893" Jan 1 14:59:01.610: INFO: Deleting pod "simpletest-rc-to-be-deleted-djgtj" in namespace "gc-893" Jan 1 14:59:01.654: INFO: Deleting pod "simpletest-rc-to-be-deleted-dqs5v" in namespace "gc-893" Jan 1 14:59:01.797: INFO: Deleting pod "simpletest-rc-to-be-deleted-dvrgz" in namespace "gc-893" Jan 1 14:59:01.871: INFO: Deleting pod "simpletest-rc-to-be-deleted-f7tgx" in namespace "gc-893" Jan 1 14:59:01.928: INFO: Deleting pod "simpletest-rc-to-be-deleted-fqwch" in namespace "gc-893" Jan 1 14:59:02.010: INFO: Deleting pod "simpletest-rc-to-be-deleted-ftdmm" in namespace "gc-893" Jan 1 14:59:02.052: INFO: Deleting pod "simpletest-rc-to-be-deleted-fw4c6" in namespace "gc-893" Jan 1 14:59:02.168: INFO: Deleting pod "simpletest-rc-to-be-deleted-g6gbb" in namespace "gc-893" Jan 1 14:59:02.219: INFO: Deleting pod "simpletest-rc-to-be-deleted-gdnfl" in namespace "gc-893" Jan 1 14:59:02.312: INFO: Deleting pod "simpletest-rc-to-be-deleted-gkbwb" in namespace "gc-893" Jan 1 14:59:02.459: INFO: Deleting pod "simpletest-rc-to-be-deleted-gngxf" in namespace "gc-893" Jan 1 14:59:02.481: INFO: Deleting pod "simpletest-rc-to-be-deleted-gnqzn" in namespace "gc-893" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:02.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-893" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":25,"skipped":450,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:02.611: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-upd-6dd2aca6-dbea-4ce3-8601-c7ae5f635738 �[1mSTEP�[0m: Creating the pod Jan 1 14:59:02.790: INFO: The status of Pod pod-configmaps-1639f4ea-0d28-4592-925e-6f66b8652422 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:04.793: INFO: The status of Pod pod-configmaps-1639f4ea-0d28-4592-925e-6f66b8652422 is Running (Ready = true) �[1mSTEP�[0m: Updating configmap configmap-test-upd-6dd2aca6-dbea-4ce3-8601-c7ae5f635738 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:06.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-4512" for this suite. 
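The ConfigMap volume spec above relies on the kubelet periodically syncing ConfigMap-backed volumes, so an update to the ConfigMap shows up in the mounted file without restarting the pod. A minimal sketch of the same behaviour, using hypothetical names (demo-cm, cm-watch) and assuming kubectl points at the workload cluster:

# Create the ConfigMap and a pod that keeps printing the mounted key.
kubectl create configmap demo-cm --from-literal=key=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-watch
spec:
  restartPolicy: Never
  containers:
  - name: watch
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/demo/key; echo; sleep 5; done"]
    volumeMounts:
    - name: demo
      mountPath: /etc/demo
  volumes:
  - name: demo
    configMap:
      name: demo-cm
EOF

# Update the ConfigMap and watch the new value appear in the pod's output
# after the next kubelet sync (typically within a minute).
kubectl create configmap demo-cm --from-literal=key=value-2 --dry-run=client -o yaml | kubectl apply -f -
kubectl logs -f cm-watch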
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":456,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:02.028: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-2cf0fd4a-a4b3-420c-9110-5729cb2156f6 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 14:59:02.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80" in namespace "projected-9986" to be "Succeeded or Failed" Jan 1 14:59:02.370: INFO: Pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80": Phase="Pending", Reason="", readiness=false. Elapsed: 53.494337ms Jan 1 14:59:04.374: INFO: Pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056784852s Jan 1 14:59:06.377: INFO: Pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80": Phase="Pending", Reason="", readiness=false. Elapsed: 4.059888918s Jan 1 14:59:08.381: INFO: Pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.064019405s �[1mSTEP�[0m: Saw pod success Jan 1 14:59:08.381: INFO: Pod "pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80" satisfied condition "Succeeded or Failed" Jan 1 14:59:08.385: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb pod pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 14:59:08.408: INFO: Waiting for pod pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80 to disappear Jan 1 14:59:08.411: INFO: Pod pod-projected-configmaps-e7c37abb-2bd9-4465-994d-7defe7861f80 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:08.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-9986" for this suite. 
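The projected-ConfigMap spec above additionally exercises key-to-path mappings: the volume exposes only the listed keys, under the file names given in items. An illustrative sketch reusing the hypothetical demo-cm from the previous example:

# Mount a single ConfigMap key under a renamed path via a projected volume.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo
spec:
  restartPolicy: Never
  containers:
  - name: show
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/renamed-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: demo-cm
          items:
          - key: key            # key in the ConfigMap
            path: renamed-key   # file name inside the mount
EOF
kubectl logs projected-demo     # once the pod has run, prints the mapped value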
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":195,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:06.872: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should succeed in writing subpaths in container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: waiting for pod running �[1mSTEP�[0m: creating a file in subpath Jan 1 14:59:10.983: INFO: ExecWithOptions {Command:[/bin/sh -c touch /volume_mount/mypath/foo/test.log] Namespace:var-expansion-8486 PodName:var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:59:10.983: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:59:10.984: INFO: ExecWithOptions: Clientset creation Jan 1 14:59:10.984: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-8486/pods/var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00/exec?command=%2Fbin%2Fsh&command=-c&command=touch+%2Fvolume_mount%2Fmypath%2Ffoo%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: test for file in mounted path Jan 1 14:59:11.065: INFO: ExecWithOptions {Command:[/bin/sh -c test -f /subpath_mount/test.log] Namespace:var-expansion-8486 PodName:var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00 ContainerName:dapi-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 14:59:11.065: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 14:59:11.066: INFO: ExecWithOptions: Clientset creation Jan 1 14:59:11.066: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/var-expansion-8486/pods/var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00/exec?command=%2Fbin%2Fsh&command=-c&command=test+-f+%2Fsubpath_mount%2Ftest.log&container=dapi-container&container=dapi-container&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: updating the annotation value Jan 1 14:59:11.662: INFO: Successfully updated pod "var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00" �[1mSTEP�[0m: waiting for annotated pod running �[1mSTEP�[0m: deleting the pod gracefully Jan 1 14:59:11.666: INFO: Deleting pod "var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00" in namespace "var-expansion-8486" Jan 1 14:59:11.671: INFO: Wait up to 5m0s for pod "var-expansion-cf37447f-9cd8-4e3f-9917-529074eb9b00" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:43.679: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-8486" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":-1,"completed":27,"skipped":483,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:08.436: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 24.24.135.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.135.24.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.24.135.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.135.24.24_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8265.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8265.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8265.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 24.24.135.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.135.24.24_udp@PTR;check="$$(dig +tcp +noall +answer +search 24.24.135.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.135.24.24_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 1 14:59:16.520: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.524: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.527: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.530: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.544: INFO: Unable to read jessie_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.546: INFO: Unable to read jessie_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.549: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.552: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:16.564: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_udp@dns-test-service.dns-8265.svc.cluster.local jessie_tcp@dns-test-service.dns-8265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:21.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.599: INFO: Unable to read jessie_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.606: INFO: Unable to read jessie_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.611: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.617: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:21.630: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local 
wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_udp@dns-test-service.dns-8265.svc.cluster.local jessie_tcp@dns-test-service.dns-8265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:26.571: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.574: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.577: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.595: INFO: Unable to read jessie_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.598: INFO: Unable to read jessie_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.601: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.604: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:26.617: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_udp@dns-test-service.dns-8265.svc.cluster.local jessie_tcp@dns-test-service.dns-8265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:31.572: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.575: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod 
dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.578: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.581: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.598: INFO: Unable to read jessie_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:31.622: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_udp@dns-test-service.dns-8265.svc.cluster.local jessie_tcp@dns-test-service.dns-8265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:36.569: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.579: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.583: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.586: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.601: INFO: Unable to read 
jessie_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.605: INFO: Unable to read jessie_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.609: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.612: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:36.623: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_udp@dns-test-service.dns-8265.svc.cluster.local jessie_tcp@dns-test-service.dns-8265.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:41.570: INFO: Unable to read wheezy_udp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:41.573: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:41.580: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local from pod dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004: the server could not find the requested resource (get pods dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004) Jan 1 14:59:41.623: INFO: Lookups using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 failed for: [wheezy_udp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@dns-test-service.dns-8265.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-8265.svc.cluster.local] Jan 1 14:59:46.627: INFO: DNS probes using dns-8265/dns-test-2c4f8b81-6ede-43cc-8f50-dfa205554004 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test service �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:46.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-8265" for this suite. 
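The DNS probes above do recover (the lookups succeed at 14:59:46), and the dig invocations from the wheezy/jessie loops can be replayed once the test pod is gone from any pod that has dig installed. A sketch using a throwaway debug pod; the image name and tag are an assumption, any image shipping dig works:

# Resolve the headless test Service over UDP and over TCP, exactly as the
# probe loops above do (+search lets the pod's resolv.conf complete the name).
kubectl --kubeconfig=/tmp/kubeconfig run dns-debug --rm -it --restart=Never \
  --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5 -- \
  sh -c 'dig +notcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A; \
         dig +tcp +noall +answer +search dns-test-service.dns-8265.svc.cluster.local A'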
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":14,"skipped":203,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:43.694: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-3737bd2d-e613-4ab6-96e8-b8440c522062 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 14:59:43.735: INFO: Waiting up to 5m0s for pod "pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056" in namespace "configmap-8825" to be "Succeeded or Failed" Jan 1 14:59:43.739: INFO: Pod "pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056": Phase="Pending", Reason="", readiness=false. Elapsed: 3.531958ms Jan 1 14:59:45.744: INFO: Pod "pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008748563s Jan 1 14:59:47.749: INFO: Pod "pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013507666s �[1mSTEP�[0m: Saw pod success Jan 1 14:59:47.749: INFO: Pod "pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056" satisfied condition "Succeeded or Failed" Jan 1 14:59:47.753: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 14:59:47.772: INFO: Waiting for pod pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056 to disappear Jan 1 14:59:47.775: INFO: Pod pod-configmaps-ce93a9c1-3514-4345-9216-bfb0c6205056 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:47.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-8825" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":485,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:46.847: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:59:46.867: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jan 1 14:59:46.878: INFO: Pod name sample-pod: Found 0 pods out of 1 Jan 1 14:59:51.885: INFO: Pod name sample-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Jan 1 14:59:51.885: INFO: Creating deployment "test-rolling-update-deployment" Jan 1 14:59:51.889: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jan 1 14:59:51.899: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jan 1 14:59:53.907: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jan 1 14:59:53.912: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 1 14:59:53.923: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7055 6515d70a-fa57-4555-beed-1cd521982602 9380 1 2023-01-01 14:59:51 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-01 14:59:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:59:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003553d78 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-01 14:59:51 +0000 UTC,LastTransitionTime:2023-01-01 14:59:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8656fc4b57" has successfully progressed.,LastUpdateTime:2023-01-01 14:59:53 +0000 UTC,LastTransitionTime:2023-01-01 14:59:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 1 14:59:53.926: INFO: New ReplicaSet "test-rolling-update-deployment-8656fc4b57" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8656fc4b57 deployment-7055 8bf5cb00-3a8e-4f05-b9f5-13ae710c5f72 9366 1 2023-01-01 14:59:51 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 6515d70a-fa57-4555-beed-1cd521982602 0xc002880dd7 0xc002880dd8}] [] [{kube-controller-manager Update apps/v1 2023-01-01 14:59:51 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6515d70a-fa57-4555-beed-1cd521982602\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:59:53 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8656fc4b57,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002880ea8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 1 14:59:53.926: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jan 1 14:59:53.926: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7055 e4a7d94b-125b-4feb-a31f-ea239aa2b2d4 9379 2 2023-01-01 14:59:46 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 6515d70a-fa57-4555-beed-1cd521982602 0xc002880c9f 0xc002880cb0}] [] [{e2e.test Update apps/v1 2023-01-01 14:59:46 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:59:53 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6515d70a-fa57-4555-beed-1cd521982602\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-01 14:59:53 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002880d78 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 1 14:59:53.929: INFO: Pod "test-rolling-update-deployment-8656fc4b57-njjc6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8656fc4b57-njjc6 test-rolling-update-deployment-8656fc4b57- deployment-7055 0ce27e14-dbbd-474b-b019-3ca01818ac71 9365 0 2023-01-01 14:59:51 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8656fc4b57 8bf5cb00-3a8e-4f05-b9f5-13ae710c5f72 0xc002881307 0xc002881308}] [] [{kube-controller-manager Update v1 2023-01-01 14:59:51 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8bf5cb00-3a8e-4f05-b9f5-13ae710c5f72\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-01 14:59:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.74\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j7f2b,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j7f2b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-upqhfa-worker-zwqnic,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:59:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:59:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:59:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 14:59:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.74,StartTime:2023-01-01 14:59:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-01 14:59:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://7e12ee0dfe0476be9d0b9929ccebe0bab7d19be0dd38d341be6fb2d5c1d46ffd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.74,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:53.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-7055" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":15,"skipped":234,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:47.795: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svc-latency �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 14:59:47.817: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: creating replication controller svc-latency-rc in namespace svc-latency-8168 I0101 14:59:47.830579 19 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8168, replica count: 1 I0101 14:59:48.882874 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 14:59:48.994: INFO: Created: latency-svc-75wdp Jan 1 14:59:49.005: INFO: Got endpoints: latency-svc-75wdp [22.180227ms] Jan 1 14:59:49.027: INFO: Created: latency-svc-x6bj5 Jan 1 14:59:49.037: INFO: Got endpoints: latency-svc-x6bj5 [31.449448ms] 
Jan 1 14:59:49.040: INFO: Created: latency-svc-c4lvv Jan 1 14:59:49.045: INFO: Got endpoints: latency-svc-c4lvv [37.089835ms] Jan 1 14:59:49.054: INFO: Created: latency-svc-zrvqf Jan 1 14:59:49.064: INFO: Got endpoints: latency-svc-zrvqf [55.44679ms] Jan 1 14:59:49.065: INFO: Created: latency-svc-wtvkn Jan 1 14:59:49.085: INFO: Got endpoints: latency-svc-wtvkn [77.320213ms] Jan 1 14:59:49.093: INFO: Created: latency-svc-hf92q Jan 1 14:59:49.101: INFO: Got endpoints: latency-svc-hf92q [94.221615ms] Jan 1 14:59:49.110: INFO: Created: latency-svc-fbvkp Jan 1 14:59:49.121: INFO: Got endpoints: latency-svc-fbvkp [112.934054ms] Jan 1 14:59:49.121: INFO: Created: latency-svc-sx77v Jan 1 14:59:49.129: INFO: Got endpoints: latency-svc-sx77v [121.632709ms] Jan 1 14:59:49.140: INFO: Created: latency-svc-jr46s Jan 1 14:59:49.146: INFO: Got endpoints: latency-svc-jr46s [139.169403ms] Jan 1 14:59:49.153: INFO: Created: latency-svc-t8f6g Jan 1 14:59:49.155: INFO: Got endpoints: latency-svc-t8f6g [147.098046ms] Jan 1 14:59:49.161: INFO: Created: latency-svc-qjs6b Jan 1 14:59:49.170: INFO: Got endpoints: latency-svc-qjs6b [162.151246ms] Jan 1 14:59:49.172: INFO: Created: latency-svc-9vs4z Jan 1 14:59:49.182: INFO: Got endpoints: latency-svc-9vs4z [174.874887ms] Jan 1 14:59:49.184: INFO: Created: latency-svc-bq7wg Jan 1 14:59:49.194: INFO: Got endpoints: latency-svc-bq7wg [185.674782ms] Jan 1 14:59:49.199: INFO: Created: latency-svc-srbm8 Jan 1 14:59:49.203: INFO: Got endpoints: latency-svc-srbm8 [196.555375ms] Jan 1 14:59:49.217: INFO: Created: latency-svc-pqbks Jan 1 14:59:49.220: INFO: Got endpoints: latency-svc-pqbks [214.306757ms] Jan 1 14:59:49.228: INFO: Created: latency-svc-k4c4h Jan 1 14:59:49.240: INFO: Got endpoints: latency-svc-k4c4h [232.935092ms] Jan 1 14:59:49.253: INFO: Created: latency-svc-v9jks Jan 1 14:59:49.259: INFO: Got endpoints: latency-svc-v9jks [221.743939ms] Jan 1 14:59:49.269: INFO: Created: latency-svc-z7l85 Jan 1 14:59:49.282: INFO: Got endpoints: latency-svc-z7l85 [237.354727ms] Jan 1 14:59:49.288: INFO: Created: latency-svc-ckfps Jan 1 14:59:49.306: INFO: Got endpoints: latency-svc-ckfps [241.701094ms] Jan 1 14:59:49.308: INFO: Created: latency-svc-cpwfg Jan 1 14:59:49.315: INFO: Got endpoints: latency-svc-cpwfg [229.414678ms] Jan 1 14:59:49.316: INFO: Created: latency-svc-vr4hp Jan 1 14:59:49.325: INFO: Created: latency-svc-gmn9d Jan 1 14:59:49.327: INFO: Got endpoints: latency-svc-vr4hp [225.806176ms] Jan 1 14:59:49.338: INFO: Got endpoints: latency-svc-gmn9d [216.951675ms] Jan 1 14:59:49.340: INFO: Created: latency-svc-ft66n Jan 1 14:59:49.348: INFO: Got endpoints: latency-svc-ft66n [219.690537ms] Jan 1 14:59:49.349: INFO: Created: latency-svc-7bc79 Jan 1 14:59:49.357: INFO: Got endpoints: latency-svc-7bc79 [210.292903ms] Jan 1 14:59:49.360: INFO: Created: latency-svc-zftz9 Jan 1 14:59:49.367: INFO: Got endpoints: latency-svc-zftz9 [211.460015ms] Jan 1 14:59:49.367: INFO: Created: latency-svc-kx759 Jan 1 14:59:49.376: INFO: Got endpoints: latency-svc-kx759 [205.793969ms] Jan 1 14:59:49.378: INFO: Created: latency-svc-dwsd4 Jan 1 14:59:49.384: INFO: Got endpoints: latency-svc-dwsd4 [202.172226ms] Jan 1 14:59:49.387: INFO: Created: latency-svc-qx9wt Jan 1 14:59:49.390: INFO: Got endpoints: latency-svc-qx9wt [196.173891ms] Jan 1 14:59:49.396: INFO: Created: latency-svc-qc5zn Jan 1 14:59:49.404: INFO: Got endpoints: latency-svc-qc5zn [200.984429ms] Jan 1 14:59:49.406: INFO: Created: latency-svc-w5sh5 Jan 1 14:59:49.416: INFO: Got endpoints: latency-svc-w5sh5 [195.782835ms] Jan 
1 14:59:49.419: INFO: Created: latency-svc-9nnm9 Jan 1 14:59:49.426: INFO: Got endpoints: latency-svc-9nnm9 [185.141182ms] Jan 1 14:59:49.426: INFO: Created: latency-svc-d8tzc Jan 1 14:59:49.431: INFO: Got endpoints: latency-svc-d8tzc [172.126301ms] Jan 1 14:59:49.446: INFO: Created: latency-svc-mjt59 Jan 1 14:59:49.456: INFO: Got endpoints: latency-svc-mjt59 [174.104232ms] Jan 1 14:59:49.456: INFO: Created: latency-svc-wsgzx Jan 1 14:59:49.463: INFO: Created: latency-svc-hbjcj Jan 1 14:59:49.465: INFO: Got endpoints: latency-svc-wsgzx [159.591222ms] Jan 1 14:59:49.472: INFO: Got endpoints: latency-svc-hbjcj [157.348249ms] Jan 1 14:59:49.586: INFO: Created: latency-svc-5h9n4 Jan 1 14:59:49.594: INFO: Created: latency-svc-glfdh Jan 1 14:59:49.596: INFO: Created: latency-svc-6642q Jan 1 14:59:49.597: INFO: Created: latency-svc-wwmk9 Jan 1 14:59:49.597: INFO: Created: latency-svc-fvdl9 Jan 1 14:59:49.599: INFO: Created: latency-svc-c55w2 Jan 1 14:59:49.608: INFO: Created: latency-svc-kmzq4 Jan 1 14:59:49.608: INFO: Created: latency-svc-rlpd4 Jan 1 14:59:49.608: INFO: Created: latency-svc-nwfrr Jan 1 14:59:49.608: INFO: Created: latency-svc-4nhnd Jan 1 14:59:49.608: INFO: Got endpoints: latency-svc-5h9n4 [251.409427ms] Jan 1 14:59:49.608: INFO: Created: latency-svc-kfvff Jan 1 14:59:49.608: INFO: Got endpoints: latency-svc-glfdh [218.146796ms] Jan 1 14:59:49.609: INFO: Created: latency-svc-dvw58 Jan 1 14:59:49.611: INFO: Created: latency-svc-knt9t Jan 1 14:59:49.611: INFO: Got endpoints: latency-svc-knt9t [272.49652ms] Jan 1 14:59:49.611: INFO: Created: latency-svc-5wthr Jan 1 14:59:49.612: INFO: Created: latency-svc-65dcp Jan 1 14:59:49.613: INFO: Got endpoints: latency-svc-kfvff [286.41194ms] Jan 1 14:59:49.613: INFO: Got endpoints: latency-svc-nwfrr [141.263771ms] Jan 1 14:59:49.614: INFO: Got endpoints: latency-svc-4nhnd [174.485282ms] Jan 1 14:59:49.614: INFO: Got endpoints: latency-svc-kmzq4 [157.805465ms] Jan 1 14:59:49.635: INFO: Created: latency-svc-64tjg Jan 1 14:59:49.649: INFO: Created: latency-svc-5ttsc Jan 1 14:59:49.653: INFO: Got endpoints: latency-svc-wwmk9 [277.718485ms] Jan 1 14:59:49.661: INFO: Created: latency-svc-v49p7 Jan 1 14:59:49.670: INFO: Created: latency-svc-z5tv7 Jan 1 14:59:49.682: INFO: Created: latency-svc-bj6b7 Jan 1 14:59:49.702: INFO: Created: latency-svc-kkwb5 Jan 1 14:59:49.710: INFO: Got endpoints: latency-svc-5wthr [343.145676ms] Jan 1 14:59:49.721: INFO: Created: latency-svc-wfl29 Jan 1 14:59:49.734: INFO: Created: latency-svc-xsjl2 Jan 1 14:59:49.751: INFO: Got endpoints: latency-svc-65dcp [334.874021ms] Jan 1 14:59:49.755: INFO: Created: latency-svc-65v5p Jan 1 14:59:49.766: INFO: Created: latency-svc-wc789 Jan 1 14:59:49.801: INFO: Got endpoints: latency-svc-dvw58 [452.073252ms] Jan 1 14:59:49.812: INFO: Created: latency-svc-xhx7d Jan 1 14:59:49.857: INFO: Got endpoints: latency-svc-rlpd4 [391.250589ms] Jan 1 14:59:49.869: INFO: Created: latency-svc-rwfx9 Jan 1 14:59:49.900: INFO: Got endpoints: latency-svc-6642q [495.299162ms] Jan 1 14:59:49.914: INFO: Created: latency-svc-z95xr Jan 1 14:59:49.951: INFO: Got endpoints: latency-svc-c55w2 [525.389259ms] Jan 1 14:59:49.965: INFO: Created: latency-svc-d65mq Jan 1 14:59:50.000: INFO: Got endpoints: latency-svc-fvdl9 [616.04926ms] Jan 1 14:59:50.015: INFO: Created: latency-svc-cq64l Jan 1 14:59:50.054: INFO: Got endpoints: latency-svc-64tjg [445.909235ms] Jan 1 14:59:50.071: INFO: Created: latency-svc-6h8lz Jan 1 14:59:50.100: INFO: Got endpoints: latency-svc-5ttsc [491.632181ms] Jan 1 14:59:50.113: INFO: 
Created: latency-svc-pg9tv Jan 1 14:59:50.152: INFO: Got endpoints: latency-svc-v49p7 [541.618955ms] Jan 1 14:59:50.164: INFO: Created: latency-svc-hcq96 Jan 1 14:59:50.204: INFO: Got endpoints: latency-svc-z5tv7 [590.536309ms] Jan 1 14:59:50.216: INFO: Created: latency-svc-4qtpk Jan 1 14:59:50.251: INFO: Got endpoints: latency-svc-bj6b7 [637.178022ms] Jan 1 14:59:50.262: INFO: Created: latency-svc-7zpmw Jan 1 14:59:50.301: INFO: Got endpoints: latency-svc-kkwb5 [685.893175ms] Jan 1 14:59:50.327: INFO: Created: latency-svc-r95dk Jan 1 14:59:50.353: INFO: Got endpoints: latency-svc-wfl29 [737.903295ms] Jan 1 14:59:50.364: INFO: Created: latency-svc-nt9qz Jan 1 14:59:50.400: INFO: Got endpoints: latency-svc-xsjl2 [746.57341ms] Jan 1 14:59:50.412: INFO: Created: latency-svc-b9j2c Jan 1 14:59:50.452: INFO: Got endpoints: latency-svc-65v5p [741.798195ms] Jan 1 14:59:50.461: INFO: Created: latency-svc-lnkvj Jan 1 14:59:50.501: INFO: Got endpoints: latency-svc-wc789 [749.596313ms] Jan 1 14:59:50.513: INFO: Created: latency-svc-4mrb2 Jan 1 14:59:50.550: INFO: Got endpoints: latency-svc-xhx7d [749.212976ms] Jan 1 14:59:50.562: INFO: Created: latency-svc-z249d Jan 1 14:59:50.603: INFO: Got endpoints: latency-svc-rwfx9 [746.422556ms] Jan 1 14:59:50.615: INFO: Created: latency-svc-4s9nh Jan 1 14:59:50.652: INFO: Got endpoints: latency-svc-z95xr [751.94103ms] Jan 1 14:59:50.672: INFO: Created: latency-svc-wqsbr Jan 1 14:59:50.706: INFO: Got endpoints: latency-svc-d65mq [754.571718ms] Jan 1 14:59:50.720: INFO: Created: latency-svc-zzvrn Jan 1 14:59:50.750: INFO: Got endpoints: latency-svc-cq64l [749.647382ms] Jan 1 14:59:50.761: INFO: Created: latency-svc-62kl6 Jan 1 14:59:50.800: INFO: Got endpoints: latency-svc-6h8lz [745.435377ms] Jan 1 14:59:50.813: INFO: Created: latency-svc-v9h6w Jan 1 14:59:50.852: INFO: Got endpoints: latency-svc-pg9tv [752.233524ms] Jan 1 14:59:50.865: INFO: Created: latency-svc-tdh9v Jan 1 14:59:50.906: INFO: Got endpoints: latency-svc-hcq96 [753.262447ms] Jan 1 14:59:50.920: INFO: Created: latency-svc-qzk56 Jan 1 14:59:50.950: INFO: Got endpoints: latency-svc-4qtpk [746.197567ms] Jan 1 14:59:50.963: INFO: Created: latency-svc-r94nw Jan 1 14:59:51.001: INFO: Got endpoints: latency-svc-7zpmw [750.332225ms] Jan 1 14:59:51.017: INFO: Created: latency-svc-fm69d Jan 1 14:59:51.056: INFO: Got endpoints: latency-svc-r95dk [754.54999ms] Jan 1 14:59:51.069: INFO: Created: latency-svc-hqtl5 Jan 1 14:59:51.100: INFO: Got endpoints: latency-svc-nt9qz [747.402559ms] Jan 1 14:59:51.113: INFO: Created: latency-svc-wlnss Jan 1 14:59:51.150: INFO: Got endpoints: latency-svc-b9j2c [749.505302ms] Jan 1 14:59:51.162: INFO: Created: latency-svc-q48x2 Jan 1 14:59:51.202: INFO: Got endpoints: latency-svc-lnkvj [749.592423ms] Jan 1 14:59:51.212: INFO: Created: latency-svc-jxwt7 Jan 1 14:59:51.253: INFO: Got endpoints: latency-svc-4mrb2 [752.270523ms] Jan 1 14:59:51.264: INFO: Created: latency-svc-c9xlw Jan 1 14:59:51.301: INFO: Got endpoints: latency-svc-z249d [751.514232ms] Jan 1 14:59:51.313: INFO: Created: latency-svc-hbhtt Jan 1 14:59:51.350: INFO: Got endpoints: latency-svc-4s9nh [746.490125ms] Jan 1 14:59:51.360: INFO: Created: latency-svc-64r5k Jan 1 14:59:51.401: INFO: Got endpoints: latency-svc-wqsbr [748.931175ms] Jan 1 14:59:51.411: INFO: Created: latency-svc-mxchz Jan 1 14:59:51.450: INFO: Got endpoints: latency-svc-zzvrn [744.16706ms] Jan 1 14:59:51.460: INFO: Created: latency-svc-zsnhn Jan 1 14:59:51.500: INFO: Got endpoints: latency-svc-62kl6 [750.112822ms] Jan 1 14:59:51.513: INFO: 
Created: latency-svc-tcrr4 Jan 1 14:59:51.554: INFO: Got endpoints: latency-svc-v9h6w [754.251829ms] Jan 1 14:59:51.564: INFO: Created: latency-svc-d586v Jan 1 14:59:51.600: INFO: Got endpoints: latency-svc-tdh9v [746.794288ms] Jan 1 14:59:51.623: INFO: Created: latency-svc-2xdpt Jan 1 14:59:51.651: INFO: Got endpoints: latency-svc-qzk56 [744.960763ms] Jan 1 14:59:51.668: INFO: Created: latency-svc-9tw7b Jan 1 14:59:51.700: INFO: Got endpoints: latency-svc-r94nw [750.36847ms] Jan 1 14:59:51.713: INFO: Created: latency-svc-4zsf8 Jan 1 14:59:51.753: INFO: Got endpoints: latency-svc-fm69d [752.117118ms] Jan 1 14:59:51.765: INFO: Created: latency-svc-2l4cz Jan 1 14:59:51.800: INFO: Got endpoints: latency-svc-hqtl5 [743.915539ms] Jan 1 14:59:51.811: INFO: Created: latency-svc-z57vx Jan 1 14:59:51.853: INFO: Got endpoints: latency-svc-wlnss [752.967604ms] Jan 1 14:59:51.863: INFO: Created: latency-svc-pql5g Jan 1 14:59:51.901: INFO: Got endpoints: latency-svc-q48x2 [751.615554ms] Jan 1 14:59:51.917: INFO: Created: latency-svc-fx6kh Jan 1 14:59:51.951: INFO: Got endpoints: latency-svc-jxwt7 [748.985702ms] Jan 1 14:59:51.961: INFO: Created: latency-svc-gkkz2 Jan 1 14:59:52.001: INFO: Got endpoints: latency-svc-c9xlw [748.228868ms] Jan 1 14:59:52.015: INFO: Created: latency-svc-xk646 Jan 1 14:59:52.059: INFO: Got endpoints: latency-svc-hbhtt [757.887649ms] Jan 1 14:59:52.084: INFO: Created: latency-svc-bq5vl Jan 1 14:59:52.102: INFO: Got endpoints: latency-svc-64r5k [752.054845ms] Jan 1 14:59:52.113: INFO: Created: latency-svc-gznh9 Jan 1 14:59:52.151: INFO: Got endpoints: latency-svc-mxchz [749.601891ms] Jan 1 14:59:52.167: INFO: Created: latency-svc-gml48 Jan 1 14:59:52.202: INFO: Got endpoints: latency-svc-zsnhn [752.316188ms] Jan 1 14:59:52.219: INFO: Created: latency-svc-9hs7c Jan 1 14:59:52.251: INFO: Got endpoints: latency-svc-tcrr4 [750.570371ms] Jan 1 14:59:52.269: INFO: Created: latency-svc-mhpfd Jan 1 14:59:52.300: INFO: Got endpoints: latency-svc-d586v [746.106087ms] Jan 1 14:59:52.314: INFO: Created: latency-svc-zzqv2 Jan 1 14:59:52.351: INFO: Got endpoints: latency-svc-2xdpt [751.509255ms] Jan 1 14:59:52.364: INFO: Created: latency-svc-5vxlh Jan 1 14:59:52.403: INFO: Got endpoints: latency-svc-9tw7b [751.93696ms] Jan 1 14:59:52.446: INFO: Created: latency-svc-l7df9 Jan 1 14:59:52.458: INFO: Got endpoints: latency-svc-4zsf8 [757.893642ms] Jan 1 14:59:52.474: INFO: Created: latency-svc-p58q2 Jan 1 14:59:52.500: INFO: Got endpoints: latency-svc-2l4cz [746.409819ms] Jan 1 14:59:52.517: INFO: Created: latency-svc-pqwll Jan 1 14:59:52.553: INFO: Got endpoints: latency-svc-z57vx [753.579347ms] Jan 1 14:59:52.564: INFO: Created: latency-svc-2ktgc Jan 1 14:59:52.600: INFO: Got endpoints: latency-svc-pql5g [746.47026ms] Jan 1 14:59:52.618: INFO: Created: latency-svc-w5kpw Jan 1 14:59:52.650: INFO: Got endpoints: latency-svc-fx6kh [748.286915ms] Jan 1 14:59:52.664: INFO: Created: latency-svc-ns29k Jan 1 14:59:52.707: INFO: Got endpoints: latency-svc-gkkz2 [756.711084ms] Jan 1 14:59:52.729: INFO: Created: latency-svc-6mtbq Jan 1 14:59:52.751: INFO: Got endpoints: latency-svc-xk646 [749.633805ms] Jan 1 14:59:52.763: INFO: Created: latency-svc-l6jhs Jan 1 14:59:52.800: INFO: Got endpoints: latency-svc-bq5vl [740.916702ms] Jan 1 14:59:52.813: INFO: Created: latency-svc-xs7dp Jan 1 14:59:52.849: INFO: Got endpoints: latency-svc-gznh9 [747.257012ms] Jan 1 14:59:52.862: INFO: Created: latency-svc-k2gl6 Jan 1 14:59:52.899: INFO: Got endpoints: latency-svc-gml48 [747.887758ms] Jan 1 14:59:52.909: INFO: 
Created: latency-svc-78f7f Jan 1 14:59:52.950: INFO: Got endpoints: latency-svc-9hs7c [747.898722ms] Jan 1 14:59:52.963: INFO: Created: latency-svc-fhl57 Jan 1 14:59:52.999: INFO: Got endpoints: latency-svc-mhpfd [748.021235ms] Jan 1 14:59:53.018: INFO: Created: latency-svc-dxjfg Jan 1 14:59:53.051: INFO: Got endpoints: latency-svc-zzqv2 [750.24747ms] Jan 1 14:59:53.065: INFO: Created: latency-svc-jzrf2 Jan 1 14:59:53.104: INFO: Got endpoints: latency-svc-5vxlh [752.629107ms] Jan 1 14:59:53.116: INFO: Created: latency-svc-4vffv Jan 1 14:59:53.150: INFO: Got endpoints: latency-svc-l7df9 [747.505721ms] Jan 1 14:59:53.166: INFO: Created: latency-svc-2bsv9 Jan 1 14:59:53.202: INFO: Got endpoints: latency-svc-p58q2 [742.57344ms] Jan 1 14:59:53.215: INFO: Created: latency-svc-n7m2f Jan 1 14:59:53.261: INFO: Got endpoints: latency-svc-pqwll [760.968205ms] Jan 1 14:59:53.274: INFO: Created: latency-svc-r27fq Jan 1 14:59:53.300: INFO: Got endpoints: latency-svc-2ktgc [747.009629ms] Jan 1 14:59:53.315: INFO: Created: latency-svc-glnxf Jan 1 14:59:53.349: INFO: Got endpoints: latency-svc-w5kpw [749.449018ms] Jan 1 14:59:53.375: INFO: Created: latency-svc-wcrxp Jan 1 14:59:53.401: INFO: Got endpoints: latency-svc-ns29k [751.512441ms] Jan 1 14:59:53.413: INFO: Created: latency-svc-qrjtn Jan 1 14:59:53.450: INFO: Got endpoints: latency-svc-6mtbq [742.354317ms] Jan 1 14:59:53.463: INFO: Created: latency-svc-wkgbh Jan 1 14:59:53.501: INFO: Got endpoints: latency-svc-l6jhs [749.64459ms] Jan 1 14:59:53.512: INFO: Created: latency-svc-mwrzm Jan 1 14:59:53.551: INFO: Got endpoints: latency-svc-xs7dp [750.965388ms] Jan 1 14:59:53.562: INFO: Created: latency-svc-9fsg6 Jan 1 14:59:53.600: INFO: Got endpoints: latency-svc-k2gl6 [750.889263ms] Jan 1 14:59:53.619: INFO: Created: latency-svc-44g22 Jan 1 14:59:53.651: INFO: Got endpoints: latency-svc-78f7f [752.26961ms] Jan 1 14:59:53.665: INFO: Created: latency-svc-8rs8z Jan 1 14:59:53.703: INFO: Got endpoints: latency-svc-fhl57 [752.596699ms] Jan 1 14:59:53.716: INFO: Created: latency-svc-7wv74 Jan 1 14:59:53.752: INFO: Got endpoints: latency-svc-dxjfg [752.483988ms] Jan 1 14:59:53.763: INFO: Created: latency-svc-5jk6d Jan 1 14:59:53.803: INFO: Got endpoints: latency-svc-jzrf2 [752.497368ms] Jan 1 14:59:53.815: INFO: Created: latency-svc-z9pqz Jan 1 14:59:53.849: INFO: Got endpoints: latency-svc-4vffv [745.184481ms] Jan 1 14:59:53.861: INFO: Created: latency-svc-885sl Jan 1 14:59:53.902: INFO: Got endpoints: latency-svc-2bsv9 [751.804724ms] Jan 1 14:59:53.914: INFO: Created: latency-svc-vpw7j Jan 1 14:59:53.949: INFO: Got endpoints: latency-svc-n7m2f [746.676069ms] Jan 1 14:59:53.964: INFO: Created: latency-svc-b4hz9 Jan 1 14:59:54.001: INFO: Got endpoints: latency-svc-r27fq [740.337179ms] Jan 1 14:59:54.016: INFO: Created: latency-svc-9lq5z Jan 1 14:59:54.052: INFO: Got endpoints: latency-svc-glnxf [751.628093ms] Jan 1 14:59:54.066: INFO: Created: latency-svc-jdq5g Jan 1 14:59:54.100: INFO: Got endpoints: latency-svc-wcrxp [750.867707ms] Jan 1 14:59:54.115: INFO: Created: latency-svc-9vzlc Jan 1 14:59:54.153: INFO: Got endpoints: latency-svc-qrjtn [752.152189ms] Jan 1 14:59:54.167: INFO: Created: latency-svc-nr6r7 Jan 1 14:59:54.200: INFO: Got endpoints: latency-svc-wkgbh [750.508268ms] Jan 1 14:59:54.224: INFO: Created: latency-svc-sz9xj Jan 1 14:59:54.251: INFO: Got endpoints: latency-svc-mwrzm [749.823453ms] Jan 1 14:59:54.264: INFO: Created: latency-svc-fcp5p Jan 1 14:59:54.302: INFO: Got endpoints: latency-svc-9fsg6 [750.365777ms] Jan 1 14:59:54.323: INFO: 
Created: latency-svc-hwbln Jan 1 14:59:54.353: INFO: Got endpoints: latency-svc-44g22 [753.356815ms] Jan 1 14:59:54.364: INFO: Created: latency-svc-fcfj6 Jan 1 14:59:54.402: INFO: Got endpoints: latency-svc-8rs8z [750.609271ms] Jan 1 14:59:54.417: INFO: Created: latency-svc-fzctv Jan 1 14:59:54.450: INFO: Got endpoints: latency-svc-7wv74 [747.219559ms] Jan 1 14:59:54.473: INFO: Created: latency-svc-5p4x5 Jan 1 14:59:54.500: INFO: Got endpoints: latency-svc-5jk6d [748.383329ms] Jan 1 14:59:54.522: INFO: Created: latency-svc-tr8lb Jan 1 14:59:54.553: INFO: Got endpoints: latency-svc-z9pqz [749.908459ms] Jan 1 14:59:54.565: INFO: Created: latency-svc-8nh2c Jan 1 14:59:54.600: INFO: Got endpoints: latency-svc-885sl [750.852544ms] Jan 1 14:59:54.613: INFO: Created: latency-svc-6q767 Jan 1 14:59:54.655: INFO: Got endpoints: latency-svc-vpw7j [752.238886ms] Jan 1 14:59:54.671: INFO: Created: latency-svc-ld9pf Jan 1 14:59:54.703: INFO: Got endpoints: latency-svc-b4hz9 [753.250981ms] Jan 1 14:59:54.716: INFO: Created: latency-svc-n24bt Jan 1 14:59:54.751: INFO: Got endpoints: latency-svc-9lq5z [749.627691ms] Jan 1 14:59:54.766: INFO: Created: latency-svc-nzr9r Jan 1 14:59:54.801: INFO: Got endpoints: latency-svc-jdq5g [748.372346ms] Jan 1 14:59:54.811: INFO: Created: latency-svc-r7xzb Jan 1 14:59:54.850: INFO: Got endpoints: latency-svc-9vzlc [749.327827ms] Jan 1 14:59:54.865: INFO: Created: latency-svc-d52w2 Jan 1 14:59:54.903: INFO: Got endpoints: latency-svc-nr6r7 [749.517561ms] Jan 1 14:59:54.914: INFO: Created: latency-svc-wz5cb Jan 1 14:59:54.951: INFO: Got endpoints: latency-svc-sz9xj [750.449192ms] Jan 1 14:59:54.961: INFO: Created: latency-svc-db2xs Jan 1 14:59:55.000: INFO: Got endpoints: latency-svc-fcp5p [748.185658ms] Jan 1 14:59:55.014: INFO: Created: latency-svc-t955v Jan 1 14:59:55.050: INFO: Got endpoints: latency-svc-hwbln [747.883647ms] Jan 1 14:59:55.067: INFO: Created: latency-svc-lvmhw Jan 1 14:59:55.102: INFO: Got endpoints: latency-svc-fcfj6 [748.537448ms] Jan 1 14:59:55.116: INFO: Created: latency-svc-vh78f Jan 1 14:59:55.151: INFO: Got endpoints: latency-svc-fzctv [749.2752ms] Jan 1 14:59:55.165: INFO: Created: latency-svc-4nggg Jan 1 14:59:55.200: INFO: Got endpoints: latency-svc-5p4x5 [749.673004ms] Jan 1 14:59:55.213: INFO: Created: latency-svc-g8kkr Jan 1 14:59:55.255: INFO: Got endpoints: latency-svc-tr8lb [754.917938ms] Jan 1 14:59:55.267: INFO: Created: latency-svc-v7p6g Jan 1 14:59:55.302: INFO: Got endpoints: latency-svc-8nh2c [748.461043ms] Jan 1 14:59:55.313: INFO: Created: latency-svc-s2rcm Jan 1 14:59:55.351: INFO: Got endpoints: latency-svc-6q767 [750.20058ms] Jan 1 14:59:55.363: INFO: Created: latency-svc-7xzm8 Jan 1 14:59:55.400: INFO: Got endpoints: latency-svc-ld9pf [745.249942ms] Jan 1 14:59:55.413: INFO: Created: latency-svc-j4jsh Jan 1 14:59:55.450: INFO: Got endpoints: latency-svc-n24bt [746.857265ms] Jan 1 14:59:55.461: INFO: Created: latency-svc-5fr7v Jan 1 14:59:55.504: INFO: Got endpoints: latency-svc-nzr9r [753.226996ms] Jan 1 14:59:55.517: INFO: Created: latency-svc-5zh6x Jan 1 14:59:55.550: INFO: Got endpoints: latency-svc-r7xzb [749.570017ms] Jan 1 14:59:55.562: INFO: Created: latency-svc-6m2fs Jan 1 14:59:55.601: INFO: Got endpoints: latency-svc-d52w2 [750.444455ms] Jan 1 14:59:55.620: INFO: Created: latency-svc-2644r Jan 1 14:59:55.650: INFO: Got endpoints: latency-svc-wz5cb [746.990911ms] Jan 1 14:59:55.666: INFO: Created: latency-svc-rgcx9 Jan 1 14:59:55.705: INFO: Got endpoints: latency-svc-db2xs [753.545455ms] Jan 1 14:59:55.724: INFO: 
Created: latency-svc-5dq74 Jan 1 14:59:55.752: INFO: Got endpoints: latency-svc-t955v [751.922668ms] Jan 1 14:59:55.764: INFO: Created: latency-svc-8mf8h Jan 1 14:59:55.800: INFO: Got endpoints: latency-svc-lvmhw [750.053822ms] Jan 1 14:59:55.813: INFO: Created: latency-svc-l4gx5 Jan 1 14:59:55.854: INFO: Got endpoints: latency-svc-vh78f [751.448609ms] Jan 1 14:59:55.865: INFO: Created: latency-svc-5q4kh Jan 1 14:59:55.902: INFO: Got endpoints: latency-svc-4nggg [749.956633ms] Jan 1 14:59:55.912: INFO: Created: latency-svc-gjfvt Jan 1 14:59:55.950: INFO: Got endpoints: latency-svc-g8kkr [749.740779ms] Jan 1 14:59:55.962: INFO: Created: latency-svc-sx2fc Jan 1 14:59:56.004: INFO: Got endpoints: latency-svc-v7p6g [749.046114ms] Jan 1 14:59:56.023: INFO: Created: latency-svc-grm4p Jan 1 14:59:56.054: INFO: Got endpoints: latency-svc-s2rcm [752.143264ms] Jan 1 14:59:56.072: INFO: Created: latency-svc-fq8hv Jan 1 14:59:56.102: INFO: Got endpoints: latency-svc-7xzm8 [751.098865ms] Jan 1 14:59:56.115: INFO: Created: latency-svc-tjcvc Jan 1 14:59:56.152: INFO: Got endpoints: latency-svc-j4jsh [751.904248ms] Jan 1 14:59:56.166: INFO: Created: latency-svc-5pp78 Jan 1 14:59:56.202: INFO: Got endpoints: latency-svc-5fr7v [752.062717ms] Jan 1 14:59:56.215: INFO: Created: latency-svc-ncfrj Jan 1 14:59:56.250: INFO: Got endpoints: latency-svc-5zh6x [746.067052ms] Jan 1 14:59:56.262: INFO: Created: latency-svc-hsmpw Jan 1 14:59:56.304: INFO: Got endpoints: latency-svc-6m2fs [753.338154ms] Jan 1 14:59:56.314: INFO: Created: latency-svc-ssdq5 Jan 1 14:59:56.355: INFO: Got endpoints: latency-svc-2644r [754.597267ms] Jan 1 14:59:56.368: INFO: Created: latency-svc-p2pdt Jan 1 14:59:56.400: INFO: Got endpoints: latency-svc-rgcx9 [749.540152ms] Jan 1 14:59:56.411: INFO: Created: latency-svc-c2kxf Jan 1 14:59:56.450: INFO: Got endpoints: latency-svc-5dq74 [745.142841ms] Jan 1 14:59:56.469: INFO: Created: latency-svc-m5chm Jan 1 14:59:56.502: INFO: Got endpoints: latency-svc-8mf8h [749.940151ms] Jan 1 14:59:56.516: INFO: Created: latency-svc-cpt4x Jan 1 14:59:56.550: INFO: Got endpoints: latency-svc-l4gx5 [750.118038ms] Jan 1 14:59:56.569: INFO: Created: latency-svc-dzx6x Jan 1 14:59:56.604: INFO: Got endpoints: latency-svc-5q4kh [750.29156ms] Jan 1 14:59:56.618: INFO: Created: latency-svc-mxv4v Jan 1 14:59:56.650: INFO: Got endpoints: latency-svc-gjfvt [748.464977ms] Jan 1 14:59:56.662: INFO: Created: latency-svc-s5x5d Jan 1 14:59:56.701: INFO: Got endpoints: latency-svc-sx2fc [751.471115ms] Jan 1 14:59:56.719: INFO: Created: latency-svc-lzch2 Jan 1 14:59:56.751: INFO: Got endpoints: latency-svc-grm4p [746.721167ms] Jan 1 14:59:56.809: INFO: Created: latency-svc-c4957 Jan 1 14:59:56.810: INFO: Got endpoints: latency-svc-fq8hv [755.845481ms] Jan 1 14:59:56.823: INFO: Created: latency-svc-4b5z7 Jan 1 14:59:56.852: INFO: Got endpoints: latency-svc-tjcvc [749.691821ms] Jan 1 14:59:56.902: INFO: Got endpoints: latency-svc-5pp78 [750.215831ms] Jan 1 14:59:56.950: INFO: Got endpoints: latency-svc-ncfrj [747.362098ms] Jan 1 14:59:57.005: INFO: Got endpoints: latency-svc-hsmpw [754.460243ms] Jan 1 14:59:57.053: INFO: Got endpoints: latency-svc-ssdq5 [748.89746ms] Jan 1 14:59:57.101: INFO: Got endpoints: latency-svc-p2pdt [745.435826ms] Jan 1 14:59:57.150: INFO: Got endpoints: latency-svc-c2kxf [749.986242ms] Jan 1 14:59:57.202: INFO: Got endpoints: latency-svc-m5chm [752.325645ms] Jan 1 14:59:57.252: INFO: Got endpoints: latency-svc-cpt4x [749.868588ms] Jan 1 14:59:57.300: INFO: Got endpoints: latency-svc-dzx6x 
[749.364175ms] Jan 1 14:59:57.354: INFO: Got endpoints: latency-svc-mxv4v [749.612148ms] Jan 1 14:59:57.401: INFO: Got endpoints: latency-svc-s5x5d [750.877055ms] Jan 1 14:59:57.454: INFO: Got endpoints: latency-svc-lzch2 [752.617521ms] Jan 1 14:59:57.500: INFO: Got endpoints: latency-svc-c4957 [748.872525ms] Jan 1 14:59:57.550: INFO: Got endpoints: latency-svc-4b5z7 [739.9595ms] Jan 1 14:59:57.550: INFO: Latencies: [31.449448ms 37.089835ms 55.44679ms 77.320213ms 94.221615ms 112.934054ms 121.632709ms 139.169403ms 141.263771ms 147.098046ms 157.348249ms 157.805465ms 159.591222ms 162.151246ms 172.126301ms 174.104232ms 174.485282ms 174.874887ms 185.141182ms 185.674782ms 195.782835ms 196.173891ms 196.555375ms 200.984429ms 202.172226ms 205.793969ms 210.292903ms 211.460015ms 214.306757ms 216.951675ms 218.146796ms 219.690537ms 221.743939ms 225.806176ms 229.414678ms 232.935092ms 237.354727ms 241.701094ms 251.409427ms 272.49652ms 277.718485ms 286.41194ms 334.874021ms 343.145676ms 391.250589ms 445.909235ms 452.073252ms 491.632181ms 495.299162ms 525.389259ms 541.618955ms 590.536309ms 616.04926ms 637.178022ms 685.893175ms 737.903295ms 739.9595ms 740.337179ms 740.916702ms 741.798195ms 742.354317ms 742.57344ms 743.915539ms 744.16706ms 744.960763ms 745.142841ms 745.184481ms 745.249942ms 745.435377ms 745.435826ms 746.067052ms 746.106087ms 746.197567ms 746.409819ms 746.422556ms 746.47026ms 746.490125ms 746.57341ms 746.676069ms 746.721167ms 746.794288ms 746.857265ms 746.990911ms 747.009629ms 747.219559ms 747.257012ms 747.362098ms 747.402559ms 747.505721ms 747.883647ms 747.887758ms 747.898722ms 748.021235ms 748.185658ms 748.228868ms 748.286915ms 748.372346ms 748.383329ms 748.461043ms 748.464977ms 748.537448ms 748.872525ms 748.89746ms 748.931175ms 748.985702ms 749.046114ms 749.212976ms 749.2752ms 749.327827ms 749.364175ms 749.449018ms 749.505302ms 749.517561ms 749.540152ms 749.570017ms 749.592423ms 749.596313ms 749.601891ms 749.612148ms 749.627691ms 749.633805ms 749.64459ms 749.647382ms 749.673004ms 749.691821ms 749.740779ms 749.823453ms 749.868588ms 749.908459ms 749.940151ms 749.956633ms 749.986242ms 750.053822ms 750.112822ms 750.118038ms 750.20058ms 750.215831ms 750.24747ms 750.29156ms 750.332225ms 750.365777ms 750.36847ms 750.444455ms 750.449192ms 750.508268ms 750.570371ms 750.609271ms 750.852544ms 750.867707ms 750.877055ms 750.889263ms 750.965388ms 751.098865ms 751.448609ms 751.471115ms 751.509255ms 751.512441ms 751.514232ms 751.615554ms 751.628093ms 751.804724ms 751.904248ms 751.922668ms 751.93696ms 751.94103ms 752.054845ms 752.062717ms 752.117118ms 752.143264ms 752.152189ms 752.233524ms 752.238886ms 752.26961ms 752.270523ms 752.316188ms 752.325645ms 752.483988ms 752.497368ms 752.596699ms 752.617521ms 752.629107ms 752.967604ms 753.226996ms 753.250981ms 753.262447ms 753.338154ms 753.356815ms 753.545455ms 753.579347ms 754.251829ms 754.460243ms 754.54999ms 754.571718ms 754.597267ms 754.917938ms 755.845481ms 756.711084ms 757.887649ms 757.893642ms 760.968205ms] Jan 1 14:59:57.550: INFO: 50 %ile: 748.537448ms Jan 1 14:59:57.550: INFO: 90 %ile: 752.629107ms Jan 1 14:59:57.550: INFO: 99 %ile: 757.893642ms Jan 1 14:59:57.550: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 14:59:57.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svc-latency-8168" for this suite. 
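The latency run above closes with 50/90/99 percentile summaries over 200 samples. As a rough illustration only (not the e2e framework's own code), the sketch below shows how such percentiles can be derived from the collected durations; the handful of sample values are copied from the log, and the index-truncation rule is an assumption made here for simplicity.

```go
// Illustrative sketch: derive 50/90/99 %ile figures from a sorted list of
// endpoint-propagation latencies, in the spirit of the summary printed above.
package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the sample at fraction p (0..1) of the sorted slice.
// The exact rounding the conformance test uses is not shown in this log;
// truncating the index is an assumption for demonstration purposes.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	i := int(p * float64(len(sorted)))
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	// A few values taken from the log above (nanoseconds); the real run had 200 samples.
	samples := []time.Duration{
		31449448, 748537448, 749364175, 752629107, 757893642, 760968205,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []float64{0.50, 0.90, 0.99} {
		fmt.Printf("%.0f %%ile: %v\n", p*100, percentile(samples, p))
	}
}
```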
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":29,"skipped":491,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:57.572: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service in namespace services-6399 �[1mSTEP�[0m: creating service affinity-clusterip-transition in namespace services-6399 �[1mSTEP�[0m: creating replication controller affinity-clusterip-transition in namespace services-6399 I0101 14:59:57.610035 19 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-6399, replica count: 3 I0101 15:00:00.661429 19 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 15:00:00.667: INFO: Creating new exec pod Jan 1 15:00:03.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6399 exec execpod-affinity528dq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80' Jan 1 15:00:03.919: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n" Jan 1 15:00:03.919: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 1 15:00:03.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6399 exec execpod-affinity528dq -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.136.177.202 80' Jan 1 15:00:04.116: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.136.177.202 80\nConnection to 10.136.177.202 80 port [tcp/http] succeeded!\n" Jan 1 15:00:04.116: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 1 15:00:04.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6399 exec execpod-affinity528dq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.177.202:80/ ; done' Jan 1 15:00:04.422: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl 
-q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n" Jan 1 15:00:04.422: INFO: stdout: "\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-q52vd" Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:04.422: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:34.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6399 exec execpod-affinity528dq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.177.202:80/ ; done' Jan 1 15:00:34.655: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n" Jan 1 15:00:34.655: INFO: stdout: "\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-hqmfw\naffinity-clusterip-transition-q52vd\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-hqmfw" Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-q52vd Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.655: INFO: Received response from host: affinity-clusterip-transition-hqmfw Jan 1 15:00:34.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6399 exec execpod-affinity528dq -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.136.177.202:80/ ; done' Jan 1 15:00:34.984: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s 
--connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.136.177.202:80/\n" Jan 1 15:00:34.985: INFO: stdout: "\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j\naffinity-clusterip-transition-bs99j" Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Received response from host: affinity-clusterip-transition-bs99j Jan 1 15:00:34.985: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip-transition in namespace services-6399, will wait for the garbage collector to delete the pods Jan 1 15:00:35.057: INFO: Deleting ReplicationController affinity-clusterip-transition took: 5.097922ms Jan 1 15:00:35.158: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.510704ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:36.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: 
Destroying namespace "services-6399" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":30,"skipped":497,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:36.881: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: updating the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: patching the pdb �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:38.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-9269" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":31,"skipped":498,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:38.957: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a Deployment Jan 1 15:00:38.978: INFO: Creating simple deployment test-deployment-m5lpv Jan 1 15:00:38.987: INFO: deployment "test-deployment-m5lpv" doesn't have the required revision set �[1mSTEP�[0m: Getting /status Jan 1 15:00:41.005: INFO: Deployment test-deployment-m5lpv has Conditions: [{Available True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m5lpv-764bc7c4b7" has successfully progressed.}] �[1mSTEP�[0m: updating Deployment Status Jan 1 15:00:41.012: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 15, 0, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 15, 0, 39, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 15, 0, 39, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 15, 0, 38, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-m5lpv-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} �[1mSTEP�[0m: watching for the Deployment status to be updated Jan 1 15:00:41.016: INFO: Observed &Deployment event: ADDED Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m5lpv-764bc7c4b7"} Jan 1 15:00:41.016: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.016: INFO: Observed 
Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m5lpv-764bc7c4b7"} Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Jan 1 15:00:41.016: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-m5lpv-764bc7c4b7" is progressing.} Jan 1 15:00:41.016: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m5lpv-764bc7c4b7" has successfully progressed.} Jan 1 15:00:41.016: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Jan 1 15:00:41.016: INFO: Observed Deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m5lpv-764bc7c4b7" has successfully progressed.} Jan 1 15:00:41.016: INFO: Found Deployment test-deployment-m5lpv in namespace deployment-3301 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Jan 1 15:00:41.016: INFO: Deployment test-deployment-m5lpv has an updated status �[1mSTEP�[0m: patching the Statefulset Status Jan 1 15:00:41.016: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Jan 1 15:00:41.022: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, 
time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} �[1mSTEP�[0m: watching for the Deployment status to be patched Jan 1 15:00:41.024: INFO: Observed &Deployment event: ADDED Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m5lpv-764bc7c4b7"} Jan 1 15:00:41.024: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-m5lpv-764bc7c4b7"} Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Jan 1 15:00:41.024: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-01 15:00:38 +0000 UTC 2023-01-01 15:00:38 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-m5lpv-764bc7c4b7" is progressing.} Jan 1 15:00:41.024: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m5lpv-764bc7c4b7" has successfully progressed.} Jan 1 15:00:41.024: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:39 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-01 15:00:39 +0000 UTC 2023-01-01 15:00:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-m5lpv-764bc7c4b7" has successfully progressed.} Jan 1 15:00:41.024: INFO: Observed deployment test-deployment-m5lpv in namespace deployment-3301 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: 
{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Jan 1 15:00:41.024: INFO: Observed &Deployment event: MODIFIED Jan 1 15:00:41.024: INFO: Found deployment test-deployment-m5lpv in namespace deployment-3301 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Jan 1 15:00:41.024: INFO: Deployment test-deployment-m5lpv has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 1 15:00:41.028: INFO: Deployment "test-deployment-m5lpv": &Deployment{ObjectMeta:{test-deployment-m5lpv deployment-3301 86298875-6a4f-45c2-9cb6-a8e5c60c761f 10886 1 2023-01-01 15:00:38 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-01 15:00:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 15:00:39 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-01-01 15:00:41 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00309f288 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 1 15:00:41.033: INFO: New ReplicaSet "test-deployment-m5lpv-764bc7c4b7" of Deployment "test-deployment-m5lpv": &ReplicaSet{ObjectMeta:{test-deployment-m5lpv-764bc7c4b7 deployment-3301 e85f905a-24b2-42a8-85c8-1b658efd181f 10882 1 2023-01-01 15:00:38 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-m5lpv 86298875-6a4f-45c2-9cb6-a8e5c60c761f 0xc002d00200 0xc002d00201}] [] [{kube-controller-manager Update apps/v1 2023-01-01 15:00:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"86298875-6a4f-45c2-9cb6-a8e5c60c761f\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-01 15:00:39 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d002a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 1 15:00:41.037: INFO: Pod 
"test-deployment-m5lpv-764bc7c4b7-8wb5v" is available: &Pod{ObjectMeta:{test-deployment-m5lpv-764bc7c4b7-8wb5v test-deployment-m5lpv-764bc7c4b7- deployment-3301 c6f561b4-ad8a-4afe-8003-bd8d511c24cf 10881 0 2023-01-01 15:00:38 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-m5lpv-764bc7c4b7 e85f905a-24b2-42a8-85c8-1b658efd181f 0xc002d00640 0xc002d00641}] [] [{kube-controller-manager Update v1 2023-01-01 15:00:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e85f905a-24b2-42a8-85c8-1b658efd181f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-01 15:00:39 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.77\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5jfm5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jfm5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,Ru
nAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-upqhfa-worker-9emfga,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 15:00:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 15:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 15:00:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-01 15:00:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.77,StartTime:2023-01-01 15:00:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-01 15:00:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://d8aa14151a016c4843face133e02042258e60beb37077cd304b9565983251f58,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:41.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-3301" for this suite. 
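The Deployment steps above talk to the /status subresource directly: a GET, an update, and finally a patch whose payload is printed in the log ({"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}). A rough manual equivalent via kubectl proxy would be the following; the namespace and deployment names are the ones from this run, but the namespace is destroyed in AfterEach, so they are illustrative only.
$ export KUBECONFIG=/tmp/kubeconfig
$ kubectl proxy &    # serves the API on 127.0.0.1:8001 by default
$ curl http://127.0.0.1:8001/apis/apps/v1/namespaces/deployment-3301/deployments/test-deployment-m5lpv/status
$ curl -X PATCH -H 'Content-Type: application/merge-patch+json' \
    -d '{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}' \
    http://127.0.0.1:8001/apis/apps/v1/namespaces/deployment-3301/deployments/test-deployment-m5lpv/status
A JSON merge patch replaces the whole conditions list, which is consistent with the final dump above, where only the StatusPatched condition remains.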
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":32,"skipped":505,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:41.083: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-runtime �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the container �[1mSTEP�[0m: wait for the container to reach Succeeded �[1mSTEP�[0m: get the container status �[1mSTEP�[0m: the container should be terminated �[1mSTEP�[0m: the termination message should be set Jan 1 15:00:45.137: INFO: Expected: &{} to match Container's Termination Message: -- �[1mSTEP�[0m: delete the container [AfterEach] [sig-node] Container Runtime /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:45.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-runtime-5759" for this suite. 
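The termination-message assertion above is read from the pod's status; assuming the test pod were still present (its name is not printed in this log), the same field could be pulled out with jsonpath:
$ kubectl -n container-runtime-5759 get pod <pod-name> \
    -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'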
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":528,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:45.166: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 15:00:45.198: INFO: Got root ca configmap in namespace "svcaccounts-9597" Jan 1 15:00:45.203: INFO: Deleted root ca configmap in namespace "svcaccounts-9597" �[1mSTEP�[0m: waiting for a new root ca configmap created Jan 1 15:00:45.707: INFO: Recreated root ca configmap in namespace "svcaccounts-9597" Jan 1 15:00:45.711: INFO: Updated root ca configmap in namespace "svcaccounts-9597" �[1mSTEP�[0m: waiting for the root ca configmap reconciled Jan 1 15:00:46.215: INFO: Reconciled root ca configmap in namespace "svcaccounts-9597" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:46.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-9597" for this suite. 
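The ServiceAccounts spec above deletes and then mutates the kube-root-ca.crt ConfigMap and waits for the root-ca-cert-publisher controller to reconcile it. The same behaviour can be watched in any surviving namespace (the test's own namespace is destroyed at the end, so <namespace> is a placeholder):
$ kubectl -n <namespace> get configmap kube-root-ca.crt -o yaml
$ kubectl -n <namespace> delete configmap kube-root-ca.crt
$ kubectl -n <namespace> get configmap kube-root-ca.crt -w      # reappears almost immediately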
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":34,"skipped":534,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:46.263: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 1 15:00:46.298: INFO: Waiting up to 5m0s for pod "downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b" in namespace "downward-api-8457" to be "Succeeded or Failed" Jan 1 15:00:46.301: INFO: Pod "downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.636619ms Jan 1 15:00:48.305: INFO: Pod "downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007345435s Jan 1 15:00:50.310: INFO: Pod "downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011931513s �[1mSTEP�[0m: Saw pod success Jan 1 15:00:50.310: INFO: Pod "downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b" satisfied condition "Succeeded or Failed" Jan 1 15:00:50.313: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:00:50.331: INFO: Waiting for pod downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b to disappear Jan 1 15:00:50.338: INFO: Pod downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:50.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-8457" for this suite. 
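The Downward API pod above has its name, namespace and IP injected as environment variables, and the test verifies them by reading the log of the dapi-container container. Had the pod not been deleted at the end of the spec, the equivalent manual check would be:
$ kubectl -n downward-api-8457 logs downward-api-b2c9c673-bdd1-4e00-bd54-3b69b6cea61b -c dapi-container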
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":557,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:50.360: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name cm-test-opt-del-c109f17c-024b-4b16-894d-36d6c82b4829 �[1mSTEP�[0m: Creating configMap with name cm-test-opt-upd-8d253f88-a4f3-44b0-ba5e-327d420c6fcb �[1mSTEP�[0m: Creating the pod Jan 1 15:00:50.405: INFO: The status of Pod pod-projected-configmaps-2f346c10-d8ac-44e2-b661-1fca2b6a13ef is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:52.410: INFO: The status of Pod pod-projected-configmaps-2f346c10-d8ac-44e2-b661-1fca2b6a13ef is Running (Ready = true) �[1mSTEP�[0m: Deleting configmap cm-test-opt-del-c109f17c-024b-4b16-894d-36d6c82b4829 �[1mSTEP�[0m: Updating configmap cm-test-opt-upd-8d253f88-a4f3-44b0-ba5e-327d420c6fcb �[1mSTEP�[0m: Creating configMap with name cm-test-opt-create-335eb4e6-8c91-48fd-92c5-07998bd4c123 �[1mSTEP�[0m: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:00:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7740" for this suite. 
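The projected-ConfigMap spec above deletes one optional ConfigMap, updates a second and creates a third, then waits for the kubelet to refresh the projected volume inside the running pod. Roughly the same sequence by hand would look like the following; the data key and the mount path are not shown in the log and are purely illustrative.
$ kubectl -n projected-7740 delete configmap cm-test-opt-del-c109f17c-024b-4b16-894d-36d6c82b4829
$ kubectl -n projected-7740 create configmap cm-test-opt-upd-8d253f88-a4f3-44b0-ba5e-327d420c6fcb \
    --from-literal=data-3=value-3 --dry-run=client -o yaml | kubectl -n projected-7740 apply -f -
$ kubectl -n projected-7740 exec pod-projected-configmaps-2f346c10-d8ac-44e2-b661-1fca2b6a13ef \
    -- cat /etc/projected-configmap-volumes/update/data-3     # hypothetical mount path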
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":564,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:00:56.515: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename aggregator �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77 Jan 1 15:00:56.536: INFO: >>> kubeConfig: /tmp/kubeconfig [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the sample API server. Jan 1 15:00:57.424: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created Jan 1 15:00:59.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 1, 15, 0, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 15, 0, 57, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 1, 15, 0, 57, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 1, 15, 0, 57, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 1 15:01:01.608: INFO: Waited 121.897622ms for the sample-apiserver to be ready to handle requests. �[1mSTEP�[0m: Read Status for v1alpha1.wardle.example.com �[1mSTEP�[0m: kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}' �[1mSTEP�[0m: List APIServices Jan 1 15:01:01.691: INFO: Found v1alpha1.wardle.example.com in APIServiceList [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:01:02.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "aggregator-1222" for this suite. 
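The Aggregator spec above registers the 1.17 sample API server and then bumps its APIService priority with the kubectl patch shown in the log. The standalone equivalents, while the wardle APIService is still registered, would be:
$ export KUBECONFIG=/tmp/kubeconfig
$ kubectl get apiservice v1alpha1.wardle.example.com -o yaml       # check the Available condition
$ kubectl patch apiservice v1alpha1.wardle.example.com -p '{"spec":{"versionPriority": 400}}'
$ kubectl get --raw /apis/wardle.example.com/v1alpha1              # hit the aggregated API group directly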
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":37,"skipped":584,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":8,"skipped":218,"failed":1,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:58:42.988: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service multi-endpoint-test in namespace services-6841 �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-6841 to expose endpoints map[] Jan 1 14:58:43.067: INFO: Failed go get Endpoints object: endpoints "multi-endpoint-test" not found Jan 1 14:58:44.083: INFO: successfully validated that service multi-endpoint-test in namespace services-6841 exposes endpoints map[] �[1mSTEP�[0m: Creating pod pod1 in namespace services-6841 Jan 1 14:58:44.098: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:46.136: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:48.114: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:50.113: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:52.102: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:54.115: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:56.108: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:58:58.106: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:00.101: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:02.145: INFO: The status of Pod pod1 
is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-6841 to expose endpoints map[pod1:[100]] Jan 1 14:59:02.304: INFO: successfully validated that service multi-endpoint-test in namespace services-6841 exposes endpoints map[pod1:[100]] �[1mSTEP�[0m: Creating pod pod2 in namespace services-6841 Jan 1 14:59:02.369: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:04.373: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:06.373: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:08.377: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:10.374: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: waiting up to 3m0s for service multi-endpoint-test in namespace services-6841 to expose endpoints map[pod1:[100] pod2:[101]] Jan 1 14:59:10.389: INFO: successfully validated that service multi-endpoint-test in namespace services-6841 exposes endpoints map[pod1:[100] pod2:[101]] �[1mSTEP�[0m: Checking if the Service forwards traffic to pods Jan 1 14:59:10.389: INFO: Creating new exec pod Jan 1 14:59:13.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:15.609: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:15.609: INFO: stdout: "" Jan 1 14:59:16.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:18.774: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:18.774: INFO: stdout: "" Jan 1 14:59:19.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:21.770: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:21.770: INFO: stdout: "" Jan 1 14:59:22.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:24.760: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:24.760: INFO: stdout: "" Jan 1 14:59:25.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:27.786: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:27.786: INFO: stdout: "" Jan 1 14:59:28.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:30.752: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 
80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:30.752: INFO: stdout: "" Jan 1 14:59:31.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:33.755: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:33.755: INFO: stdout: "" Jan 1 14:59:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:36.769: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\n+ echo hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:36.769: INFO: stdout: "" Jan 1 14:59:37.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:39.783: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:39.783: INFO: stdout: "" Jan 1 14:59:40.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:42.772: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:42.772: INFO: stdout: "" Jan 1 14:59:43.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:45.774: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:45.774: INFO: stdout: "" Jan 1 14:59:46.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:48.881: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:48.881: INFO: stdout: "" Jan 1 14:59:49.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:51.818: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:51.818: INFO: stdout: "" Jan 1 14:59:52.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:54.783: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:54.783: INFO: stdout: "" Jan 1 14:59:55.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 14:59:57.782: INFO: stderr: "+ echo hostName\n+ nc -v 
-t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 14:59:57.783: INFO: stdout: "" Jan 1 14:59:58.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:00.757: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:00.757: INFO: stdout: "" Jan 1 15:00:01.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:03.774: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:03.774: INFO: stdout: "" Jan 1 15:00:04.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:06.835: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:06.835: INFO: stdout: "" Jan 1 15:00:07.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:09.800: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test+ 80echo\n hostName\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:09.800: INFO: stdout: "" Jan 1 15:00:10.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:12.759: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:12.759: INFO: stdout: "" Jan 1 15:00:13.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:15.759: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:15.759: INFO: stdout: "" Jan 1 15:00:16.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:18.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:18.764: INFO: stdout: "" Jan 1 15:00:19.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:21.766: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:21.766: INFO: stdout: "" Jan 1 15:00:22.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:24.731: INFO: stderr: 
"+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:24.731: INFO: stdout: "" Jan 1 15:00:25.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:27.746: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:27.746: INFO: stdout: "" Jan 1 15:00:28.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:30.772: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:30.772: INFO: stdout: "" Jan 1 15:00:31.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:33.755: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:33.755: INFO: stdout: "" Jan 1 15:00:34.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:36.803: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:36.803: INFO: stdout: "" Jan 1 15:00:37.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:39.757: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:39.757: INFO: stdout: "" Jan 1 15:00:40.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:42.770: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:42.770: INFO: stdout: "" Jan 1 15:00:43.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:45.764: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:45.764: INFO: stdout: "" Jan 1 15:00:46.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:48.777: INFO: stderr: "+ + echo hostName\nnc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:48.777: INFO: stdout: "" Jan 1 15:00:49.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 
15:00:51.777: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:51.777: INFO: stdout: "" Jan 1 15:00:52.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:54.769: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:54.769: INFO: stdout: "" Jan 1 15:00:55.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:00:57.755: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:00:57.755: INFO: stdout: "" Jan 1 15:00:58.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:00.788: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:00.788: INFO: stdout: "" Jan 1 15:01:01.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:03.826: INFO: stderr: "+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n+ echo hostName\n" Jan 1 15:01:03.826: INFO: stdout: "" Jan 1 15:01:04.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:06.761: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:06.761: INFO: stdout: "" Jan 1 15:01:07.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:09.762: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:09.762: INFO: stdout: "" Jan 1 15:01:10.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:12.768: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:12.768: INFO: stdout: "" Jan 1 15:01:13.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:15.763: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:15.763: INFO: stdout: "" Jan 1 15:01:15.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6841 exec execpodfhjxc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
multi-endpoint-test 80' Jan 1 15:01:17.919: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:17.919: INFO: stdout: "" Jan 1 15:01:17.920: FAIL: Unexpected error: <*errors.errorString | 0xc004f6c3b0>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006036c0, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:01:17.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-6841" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[91m�[1m• Failure [155.015 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23�[0m �[91m�[1mshould serve multiport endpoints from pods [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 1 15:01:17.920: Unexpected error: <*errors.errorString | 0xc004f6c3b0>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 �[90m------------------------------�[0m [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:57:29.678: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename cronjob �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a suspended cronjob �[1mSTEP�[0m: Ensuring no jobs are scheduled �[1mSTEP�[0m: Ensuring no job exists by listing jobs explicitly �[1mSTEP�[0m: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:02:29.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "cronjob-9925" for this suite. 
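On the [sig-network] Services "should serve multiport endpoints from pods" failure recorded above: every nc attempt connects ("Connection to multi-endpoint-test 80 port [tcp/http] succeeded!") but stdout stays empty, so after the 2m0s reachability timeout the spec fails. When triaging this kind of failure by hand, the usual first step is to compare the Endpoints object with the probe the test runs; the names below are the ones from this run, but the namespace is destroyed in AfterEach, so they are illustrative only.
$ export KUBECONFIG=/tmp/kubeconfig
$ kubectl -n services-6841 get svc multi-endpoint-test -o wide
$ kubectl -n services-6841 get endpoints multi-endpoint-test -o yaml
$ kubectl -n services-6841 exec execpodfhjxc -- /bin/sh -x -c \
    'echo hostName | nc -v -t -w 2 multi-endpoint-test 80'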
• [SLOW TEST:300.065 seconds]
[sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should not schedule jobs when suspended [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":16,"skipped":289,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:02:29.775: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-rk2gv in namespace proxy-5624
I0101 15:02:29.850672 20 runners.go:193] Created replication controller with name: proxy-service-rk2gv, namespace: proxy-5624, replica count: 1
I0101 15:02:30.901610 20 runners.go:193] proxy-service-rk2gv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0101 15:02:31.901744 20 runners.go:193] proxy-service-rk2gv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0101 15:02:32.902292 20 runners.go:193] proxy-service-rk2gv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 1 15:02:32.906: INFO: setup took 3.105958315s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 1 15:02:32.912: INFO: (0) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.064431ms)
Jan 1 15:02:32.912: INFO: (0) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 5.935398ms)
Jan 1 15:02:32.912: INFO: (0) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.506542ms)
Jan 1 15:02:32.912: INFO: (0) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 6.341903ms)
Jan 1 15:02:32.912: INFO: (0) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">...
(200; 6.365511ms) Jan 1 15:02:32.913: INFO: (0) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.75138ms) Jan 1 15:02:32.913: INFO: (0) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.840112ms) Jan 1 15:02:32.921: INFO: (0) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 15.232407ms) Jan 1 15:02:32.922: INFO: (0) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 16.299041ms) Jan 1 15:02:32.922: INFO: (0) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 16.292726ms) Jan 1 15:02:32.922: INFO: (0) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 16.440518ms) Jan 1 15:02:32.922: INFO: (0) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 16.443854ms) Jan 1 15:02:32.923: INFO: (0) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 17.707395ms) Jan 1 15:02:32.923: INFO: (0) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 17.566521ms) Jan 1 15:02:32.924: INFO: (0) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 18.559719ms) Jan 1 15:02:32.928: INFO: (0) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 21.804824ms) Jan 1 15:02:32.934: INFO: (1) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 5.814407ms) Jan 1 15:02:32.934: INFO: (1) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 5.850448ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.000652ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 7.983114ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 8.151703ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.054306ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 8.129258ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.125778ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.111589ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.051357ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 8.241525ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.106185ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 8.115443ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.150053ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.162609ms) Jan 1 15:02:32.936: INFO: (1) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 8.299889ms) Jan 1 15:02:32.940: INFO: (2) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 3.914843ms) Jan 1 15:02:32.943: INFO: (2) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 6.723292ms) Jan 1 15:02:32.944: INFO: (2) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.809074ms) Jan 1 15:02:32.944: INFO: (2) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.028101ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.305422ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 8.075219ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.604074ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.740475ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.432494ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 8.917378ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 8.357346ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.624345ms) Jan 1 15:02:32.945: INFO: (2) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.347677ms) Jan 1 15:02:32.946: INFO: (2) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 8.74462ms) Jan 1 15:02:32.946: INFO: (2) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 9.028375ms) Jan 1 15:02:32.946: INFO: (2) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 9.169581ms) Jan 1 15:02:32.954: INFO: (3) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.390167ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.851875ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.93879ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 9.00634ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 9.136109ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 9.273672ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 9.113675ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.177162ms) Jan 1 15:02:32.955: INFO: (3) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 9.318538ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 9.427395ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 9.398182ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 9.539179ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 9.583903ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 9.477194ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 9.610863ms) Jan 1 15:02:32.956: INFO: (3) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 9.613019ms) Jan 1 15:02:32.960: INFO: (4) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 4.138277ms) Jan 1 15:02:32.960: INFO: (4) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 3.99006ms) Jan 1 15:02:32.960: INFO: (4) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 4.076997ms) Jan 1 15:02:32.963: INFO: (4) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 6.489518ms) Jan 1 15:02:32.963: INFO: (4) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 6.795413ms) Jan 1 15:02:32.964: INFO: (4) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 7.569882ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 9.537733ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.645542ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 10.007902ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 9.49871ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 9.873634ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... 
(200; 9.933974ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 9.719638ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.91986ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 10.059506ms) Jan 1 15:02:32.966: INFO: (4) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 9.555378ms) Jan 1 15:02:32.973: INFO: (5) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.103005ms) Jan 1 15:02:32.973: INFO: (5) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 5.969286ms) Jan 1 15:02:32.973: INFO: (5) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.067311ms) Jan 1 15:02:32.973: INFO: (5) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 6.06459ms) Jan 1 15:02:32.973: INFO: (5) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 6.280053ms) Jan 1 15:02:32.974: INFO: (5) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 6.736838ms) Jan 1 15:02:32.974: INFO: (5) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 7.457509ms) Jan 1 15:02:32.974: INFO: (5) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 7.401566ms) Jan 1 15:02:32.974: INFO: (5) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.461075ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.065733ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.067666ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.233731ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.423916ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.304638ms) Jan 1 15:02:32.975: INFO: (5) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 8.854772ms) Jan 1 15:02:32.976: INFO: (5) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 9.214277ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.403245ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.645433ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a 
href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 7.667777ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 7.481023ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.457608ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 7.177008ms) Jan 1 15:02:32.984: INFO: (6) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 7.657966ms) Jan 1 15:02:32.985: INFO: (6) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 7.928289ms) Jan 1 15:02:32.985: INFO: (6) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.689848ms) Jan 1 15:02:32.985: INFO: (6) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 9.011902ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.771717ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 9.224101ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 9.393236ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 8.937076ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 8.913543ms) Jan 1 15:02:32.986: INFO: (6) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 9.00517ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 5.525402ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 5.883169ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.178824ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 5.949398ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 6.211997ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 6.04266ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... 
(200; 6.221988ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 6.114195ms) Jan 1 15:02:32.992: INFO: (7) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 6.524746ms) Jan 1 15:02:32.993: INFO: (7) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.638649ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 7.818361ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.036659ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 7.883487ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 7.949524ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 7.883562ms) Jan 1 15:02:32.994: INFO: (7) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.39935ms) Jan 1 15:02:33.002: INFO: (8) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.683631ms) Jan 1 15:02:33.003: INFO: (8) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.310415ms) Jan 1 15:02:33.003: INFO: (8) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.623898ms) Jan 1 15:02:33.003: INFO: (8) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 9.049733ms) Jan 1 15:02:33.003: INFO: (8) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.979793ms) Jan 1 15:02:33.004: INFO: (8) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.628004ms) Jan 1 15:02:33.004: INFO: (8) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 10.133426ms) Jan 1 15:02:33.004: INFO: (8) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 10.018829ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 10.25913ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 10.25371ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... 
(200; 10.226428ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 10.275295ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 10.358893ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 10.389794ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 10.483361ms) Jan 1 15:02:33.005: INFO: (8) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 10.528906ms) Jan 1 15:02:33.010: INFO: (9) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 4.666087ms) Jan 1 15:02:33.010: INFO: (9) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 4.840068ms) Jan 1 15:02:33.010: INFO: (9) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 4.875545ms) Jan 1 15:02:33.010: INFO: (9) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 5.146917ms) Jan 1 15:02:33.011: INFO: (9) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 5.924023ms) Jan 1 15:02:33.012: INFO: (9) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 5.996189ms) Jan 1 15:02:33.012: INFO: (9) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.501501ms) Jan 1 15:02:33.012: INFO: (9) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.517302ms) Jan 1 15:02:33.012: INFO: (9) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... 
(200; 6.847323ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.166054ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 7.78267ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 7.83751ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 7.864524ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.961556ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.067095ms) Jan 1 15:02:33.013: INFO: (9) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.038131ms) Jan 1 15:02:33.020: INFO: (10) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 5.858352ms) Jan 1 15:02:33.020: INFO: (10) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 5.979669ms) Jan 1 15:02:33.020: INFO: (10) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 5.699204ms) Jan 1 15:02:33.020: INFO: (10) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 5.933969ms) Jan 1 15:02:33.020: INFO: (10) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.613082ms) Jan 1 15:02:33.021: INFO: (10) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 6.711959ms) Jan 1 15:02:33.021: INFO: (10) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 6.510691ms) Jan 1 15:02:33.021: INFO: (10) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.753822ms) Jan 1 15:02:33.021: INFO: (10) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.839629ms) Jan 1 15:02:33.022: INFO: (10) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... 
(200; 8.229111ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 8.073356ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 8.335693ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.836012ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.197845ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 9.074875ms) Jan 1 15:02:33.023: INFO: (10) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.948558ms) Jan 1 15:02:33.027: INFO: (11) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 3.159968ms) Jan 1 15:02:33.027: INFO: (11) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 3.459686ms) Jan 1 15:02:33.029: INFO: (11) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 5.670582ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 6.739775ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 6.81868ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.940521ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 6.914815ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 6.852563ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 6.91183ms) Jan 1 15:02:33.030: INFO: (11) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 7.019307ms) Jan 1 15:02:33.031: INFO: (11) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.232818ms) Jan 1 15:02:33.031: INFO: (11) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 7.215638ms) Jan 1 15:02:33.031: INFO: (11) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.425287ms) Jan 1 15:02:33.032: INFO: (11) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.877891ms) Jan 1 15:02:33.033: INFO: (11) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 9.683369ms) Jan 1 15:02:33.033: INFO: (11) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 9.890039ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... 
(200; 9.596945ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 9.657765ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.881876ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 9.956984ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 9.990433ms) Jan 1 15:02:33.043: INFO: (12) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 10.015504ms) Jan 1 15:02:33.044: INFO: (12) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 10.010921ms) Jan 1 15:02:33.044: INFO: (12) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 10.039642ms) Jan 1 15:02:33.044: INFO: (12) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 10.108587ms) Jan 1 15:02:33.044: INFO: (12) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 10.05227ms) Jan 1 15:02:33.046: INFO: (12) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 12.734984ms) Jan 1 15:02:33.047: INFO: (12) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 13.064608ms) Jan 1 15:02:33.047: INFO: (12) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 13.174934ms) Jan 1 15:02:33.047: INFO: (12) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 13.150621ms) Jan 1 15:02:33.047: INFO: (12) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 13.61473ms) Jan 1 15:02:33.047: INFO: (12) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 13.878026ms) Jan 1 15:02:33.057: INFO: (13) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 9.131333ms) Jan 1 15:02:33.057: INFO: (13) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 9.377483ms) Jan 1 15:02:33.058: INFO: (13) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 10.518885ms) Jan 1 15:02:33.059: INFO: (13) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 12.010158ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... 
(200; 12.293995ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 12.405815ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 12.274732ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 12.317558ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 12.489252ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 12.320643ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 12.348642ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 12.546102ms) Jan 1 15:02:33.060: INFO: (13) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 13.023756ms) Jan 1 15:02:33.062: INFO: (13) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 14.446241ms) Jan 1 15:02:33.062: INFO: (13) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 14.678411ms) Jan 1 15:02:33.062: INFO: (13) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 14.823592ms) Jan 1 15:02:33.069: INFO: (14) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.874499ms) Jan 1 15:02:33.069: INFO: (14) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 6.918885ms) Jan 1 15:02:33.069: INFO: (14) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 6.943031ms) Jan 1 15:02:33.070: INFO: (14) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.327814ms) Jan 1 15:02:33.070: INFO: (14) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 7.204947ms) Jan 1 15:02:33.070: INFO: (14) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 7.751864ms) Jan 1 15:02:33.070: INFO: (14) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.633367ms) Jan 1 15:02:33.071: INFO: (14) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 8.336944ms) Jan 1 15:02:33.071: INFO: (14) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.199192ms) Jan 1 15:02:33.074: INFO: (14) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... 
(200; 11.043872ms) Jan 1 15:02:33.074: INFO: (14) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 11.815947ms) Jan 1 15:02:33.074: INFO: (14) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 11.850752ms) Jan 1 15:02:33.074: INFO: (14) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 12.086472ms) Jan 1 15:02:33.075: INFO: (14) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 12.284589ms) Jan 1 15:02:33.075: INFO: (14) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 12.301993ms) Jan 1 15:02:33.075: INFO: (14) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 12.103217ms) Jan 1 15:02:33.083: INFO: (15) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.295821ms) Jan 1 15:02:33.083: INFO: (15) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.370571ms) Jan 1 15:02:33.083: INFO: (15) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 8.568697ms) Jan 1 15:02:33.086: INFO: (15) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 11.069039ms) Jan 1 15:02:33.086: INFO: (15) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 11.145701ms) Jan 1 15:02:33.086: INFO: (15) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 11.178342ms) Jan 1 15:02:33.087: INFO: (15) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 11.563103ms) Jan 1 15:02:33.087: INFO: (15) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 11.792564ms) Jan 1 15:02:33.088: INFO: (15) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 12.61296ms) Jan 1 15:02:33.088: INFO: (15) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 12.696634ms) Jan 1 15:02:33.088: INFO: (15) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 12.618658ms) Jan 1 15:02:33.089: INFO: (15) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 14.296541ms) Jan 1 15:02:33.089: INFO: (15) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 14.225442ms) Jan 1 15:02:33.090: INFO: (15) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 14.930108ms) Jan 1 15:02:33.090: INFO: (15) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 14.84862ms) Jan 1 15:02:33.091: INFO: (15) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 15.487496ms) Jan 1 15:02:33.098: INFO: (16) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 7.051736ms) Jan 1 15:02:33.098: INFO: (16) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 7.522499ms) Jan 1 15:02:33.098: INFO: (16) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 7.602937ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 7.898077ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.255285ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.341465ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 8.109012ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 8.251254ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.30925ms) Jan 1 15:02:33.099: INFO: (16) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 8.221569ms) Jan 1 15:02:33.100: INFO: (16) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 9.617077ms) Jan 1 15:02:33.101: INFO: (16) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 9.626514ms) Jan 1 15:02:33.101: INFO: (16) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 9.883591ms) Jan 1 15:02:33.101: INFO: (16) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 9.921503ms) Jan 1 15:02:33.101: INFO: (16) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 10.088002ms) Jan 1 15:02:33.101: INFO: (16) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 9.834034ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 5.516036ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 5.994235ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 5.963786ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 6.02573ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 6.001175ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 6.114807ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 6.076473ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.203506ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 6.21643ms) Jan 1 15:02:33.107: INFO: (17) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.291136ms) Jan 1 15:02:33.108: INFO: (17) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 7.0416ms) Jan 1 15:02:33.109: INFO: (17) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 7.987189ms) Jan 1 15:02:33.109: INFO: (17) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.022912ms) Jan 1 15:02:33.109: INFO: (17) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.26135ms) Jan 1 15:02:33.110: INFO: (17) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 8.407477ms) Jan 1 15:02:33.110: INFO: (17) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 8.690764ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 8.656534ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.640424ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.7469ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 8.81883ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 8.955215ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 9.250524ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... (200; 9.028674ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 8.846892ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... 
(200; 8.946754ms) Jan 1 15:02:33.119: INFO: (18) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 8.780504ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 10.860911ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 10.352059ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 10.46487ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 10.5938ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 10.707385ms) Jan 1 15:02:33.121: INFO: (18) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 11.144128ms) Jan 1 15:02:33.127: INFO: (19) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">test<... (200; 5.086568ms) Jan 1 15:02:33.127: INFO: (19) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8/proxy/rewriteme">test</a> (200; 5.229776ms) Jan 1 15:02:33.128: INFO: (19) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.873095ms) Jan 1 15:02:33.128: INFO: (19) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:460/proxy/: tls baz (200; 7.095044ms) Jan 1 15:02:33.128: INFO: (19) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:160/proxy/: foo (200; 6.820244ms) Jan 1 15:02:33.128: INFO: (19) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:1080/proxy/rewriteme">... (200; 6.970869ms) Jan 1 15:02:33.128: INFO: (19) /api/v1/namespaces/proxy-5624/pods/proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.95593ms) Jan 1 15:02:33.129: INFO: (19) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:462/proxy/: tls qux (200; 7.090848ms) Jan 1 15:02:33.129: INFO: (19) /api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/: <a href="/api/v1/namespaces/proxy-5624/pods/https:proxy-service-rk2gv-28lc8:443/proxy/tlsrewritem... 
(200; 7.130668ms)
Jan 1 15:02:33.129: INFO: (19) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname2/proxy/: bar (200; 7.237834ms)
Jan 1 15:02:33.129: INFO: (19) /api/v1/namespaces/proxy-5624/pods/http:proxy-service-rk2gv-28lc8:162/proxy/: bar (200; 6.997974ms)
Jan 1 15:02:33.129: INFO: (19) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname1/proxy/: foo (200; 7.242421ms)
Jan 1 15:02:33.130: INFO: (19) /api/v1/namespaces/proxy-5624/services/http:proxy-service-rk2gv:portname1/proxy/: foo (200; 8.329274ms)
Jan 1 15:02:33.130: INFO: (19) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname1/proxy/: tls baz (200; 8.849925ms)
Jan 1 15:02:33.131: INFO: (19) /api/v1/namespaces/proxy-5624/services/proxy-service-rk2gv:portname2/proxy/: bar (200; 9.036591ms)
Jan 1 15:02:33.131: INFO: (19) /api/v1/namespaces/proxy-5624/services/https:proxy-service-rk2gv:tlsportname2/proxy/: tls qux (200; 9.536325ms)
STEP: deleting ReplicationController proxy-service-rk2gv in namespace proxy-5624, will wait for the garbage collector to delete the pods
Jan 1 15:02:33.191: INFO: Deleting ReplicationController proxy-service-rk2gv took: 6.061253ms
Jan 1 15:02:33.292: INFO: Terminating ReplicationController proxy-service-rk2gv pods took: 100.94218ms
[AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:02:34.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":17,"skipped":314,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:02:35.050: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret.
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:02:52.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3936" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":-1,"completed":18,"skipped":337,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:02:52.155: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 15:02:52.176: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:02:58.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4494" for this suite.
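The ResourceQuota spec above walks a simple lifecycle: it counts the secrets a fresh namespace has by default, creates a quota that tracks secrets, creates a Secret and waits for the quota's status.used to capture it, then deletes the Secret and waits for the usage to be released. A rough by-hand equivalent against the same cluster (quota name, limit, and secret name below are illustrative and not taken from the test, and the test namespace has already been destroyed at this point in the log):

  kubectl --kubeconfig=/tmp/kubeconfig -n resourcequota-3936 create quota example-quota --hard=secrets=10
  kubectl --kubeconfig=/tmp/kubeconfig -n resourcequota-3936 create secret generic example-secret --from-literal=key=value
  # .status.used.secrets should now reflect the new Secret on top of the namespace's defaults
  kubectl --kubeconfig=/tmp/kubeconfig -n resourcequota-3936 get resourcequota example-quota -o jsonpath='{.status.used.secrets}'
  kubectl --kubeconfig=/tmp/kubeconfig -n resourcequota-3936 delete secret example-secret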
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":19,"skipped":354,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":8,"skipped":218,"failed":2,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:01:18.006: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should serve multiport endpoints from pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service multi-endpoint-test in namespace services-886
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-886 to expose endpoints map[]
Jan 1 15:01:18.094: INFO: successfully validated that service multi-endpoint-test in namespace services-886 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-886
Jan 1 15:01:18.120: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 15:01:20.125: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-886 to expose endpoints map[pod1:[100]]
Jan 1 15:01:20.138: INFO: successfully validated that service multi-endpoint-test in namespace services-886 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-886
Jan 1 15:01:20.148: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 15:01:22.153: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-886 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 1 15:01:22.167: INFO: successfully validated that service multi-endpoint-test in namespace services-886 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Jan 1 15:01:22.167: INFO: Creating new exec pod
Jan 1 15:01:25.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1
15:01:27.425: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:27.425: INFO: stdout: "" Jan 1 15:01:28.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:30.569: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:30.569: INFO: stdout: "" Jan 1 15:01:31.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:33.580: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:33.580: INFO: stdout: "" Jan 1 15:01:34.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:36.579: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:36.580: INFO: stdout: "" Jan 1 15:01:37.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:39.565: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:39.565: INFO: stdout: "" Jan 1 15:01:40.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:42.581: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:42.581: INFO: stdout: "" Jan 1 15:01:43.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:45.566: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:45.566: INFO: stdout: "" Jan 1 15:01:46.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:48.575: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:48.575: INFO: stdout: "" Jan 1 15:01:49.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:51.576: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:51.576: INFO: stdout: "" Jan 1 15:01:52.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 
multi-endpoint-test 80' Jan 1 15:01:54.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:54.561: INFO: stdout: "" Jan 1 15:01:55.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:01:57.609: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:01:57.609: INFO: stdout: "" Jan 1 15:01:58.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:00.594: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:00.594: INFO: stdout: "" Jan 1 15:02:01.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:03.594: INFO: stderr: "+ nc -v -t+ -w 2 multi-endpoint-testecho 80 hostName\n\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:03.595: INFO: stdout: "" Jan 1 15:02:04.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:06.576: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:06.576: INFO: stdout: "" Jan 1 15:02:07.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:09.594: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:09.594: INFO: stdout: "" Jan 1 15:02:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:12.588: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:12.588: INFO: stdout: "" Jan 1 15:02:13.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:15.634: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:15.634: INFO: stdout: "" Jan 1 15:02:16.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:18.592: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:18.592: INFO: stdout: "" Jan 1 15:02:19.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:21.577: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:21.577: INFO: stdout: "" Jan 1 15:02:22.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:24.565: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:24.565: INFO: stdout: "" Jan 1 15:02:25.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:27.576: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:27.576: INFO: stdout: "" Jan 1 15:02:28.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:30.593: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:30.593: INFO: stdout: "" Jan 1 15:02:31.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:33.580: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:33.580: INFO: stdout: "" Jan 1 15:02:34.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:36.581: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:36.581: INFO: stdout: "" Jan 1 15:02:37.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:39.581: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:39.581: INFO: stdout: "" Jan 1 15:02:40.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:42.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:42.561: INFO: stdout: "" Jan 1 15:02:43.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:45.580: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:45.581: INFO: stdout: "" Jan 1 15:02:46.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- 
/bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:48.571: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:48.571: INFO: stdout: "" Jan 1 15:02:49.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:51.576: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:51.576: INFO: stdout: "" Jan 1 15:02:52.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:54.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:54.561: INFO: stdout: "" Jan 1 15:02:55.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:02:57.572: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:02:57.572: INFO: stdout: "" Jan 1 15:02:58.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:00.565: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:00.565: INFO: stdout: "" Jan 1 15:03:01.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:03.634: INFO: stderr: "+ echo+ nc -v hostName -t\n -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:03.634: INFO: stdout: "" Jan 1 15:03:04.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:06.586: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:06.586: INFO: stdout: "" Jan 1 15:03:07.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:09.600: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:09.601: INFO: stdout: "" Jan 1 15:03:10.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:12.569: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:12.569: INFO: stdout: "" Jan 1 15:03:13.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 
exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:15.596: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:15.596: INFO: stdout: "" Jan 1 15:03:16.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:18.578: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:18.578: INFO: stdout: "" Jan 1 15:03:19.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:21.570: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:21.570: INFO: stdout: "" Jan 1 15:03:22.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:24.562: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:24.562: INFO: stdout: "" Jan 1 15:03:25.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:27.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:27.561: INFO: stdout: "" Jan 1 15:03:27.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-886 exec execpodk5x4n -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80' Jan 1 15:03:29.725: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n" Jan 1 15:03:29.725: INFO: stdout: "" Jan 1 15:03:29.725: FAIL: Unexpected error: <*errors.errorString | 0xc00469a1f0>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.glob..func24.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916 +0x7c6 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0006036c0, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:03:29.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-886" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [131.822 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should serve multiport endpoints from pods [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 15:03:29.725: Unexpected error: <*errors.errorString | 0xc00469a1f0>: { s: "service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint multi-endpoint-test:80 over TCP protocol occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:916
------------------------------
{"msg":"FAILED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":8,"skipped":218,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:03:29.921: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 1 15:03:29.978: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6396 7ebf8e38-9926-4017-a8e5-4dc9c7b8953c 11787 0 2023-01-01 15:03:29 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-01 15:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 1 15:03:29.978: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6396 7ebf8e38-9926-4017-a8e5-4dc9c7b8953c 11788 0 2023-01-01 15:03:29 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2023-01-01 15:03:29 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:03:29.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6396" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":-1,"completed":9,"skipped":259,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:02:58.394: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-6083
Jan 1 15:02:58.424: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true)
Jan 1 15:03:00.429: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true)
Jan 1 15:03:00.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode'
Jan 1 15:03:00.608: INFO: stderr: "+
curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jan 1 15:03:00.608: INFO: stdout: "iptables" Jan 1 15:03:00.608: INFO: proxyMode: iptables Jan 1 15:03:00.618: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 1 15:03:00.621: INFO: Pod kube-proxy-mode-detector no longer exists �[1mSTEP�[0m: creating service affinity-clusterip-timeout in namespace services-6083 �[1mSTEP�[0m: creating replication controller affinity-clusterip-timeout in namespace services-6083 I0101 15:03:00.648730 20 runners.go:193] Created replication controller with name: affinity-clusterip-timeout, namespace: services-6083, replica count: 3 I0101 15:03:03.700176 20 runners.go:193] affinity-clusterip-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 1 15:03:03.706: INFO: Creating new exec pod Jan 1 15:03:06.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-timeout 80' Jan 1 15:03:06.871: INFO: stderr: "+ echo hostName+ nc -v -t -w 2 affinity-clusterip-timeout 80\n\nConnection to affinity-clusterip-timeout 80 port [tcp/http] succeeded!\n" Jan 1 15:03:06.871: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 1 15:03:06.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.255.250 80' Jan 1 15:03:07.028: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.255.250 80\nConnection to 10.131.255.250 80 port [tcp/http] succeeded!\n" Jan 1 15:03:07.028: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 1 15:03:07.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.131.255.250:80/ ; done' Jan 1 15:03:07.257: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n" Jan 1 15:03:07.257: INFO: stdout: 
"\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j\naffinity-clusterip-timeout-8tf9j" Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Received response from host: affinity-clusterip-timeout-8tf9j Jan 1 15:03:07.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.131.255.250:80/' Jan 1 15:03:07.409: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n" Jan 1 15:03:07.409: INFO: stdout: "affinity-clusterip-timeout-8tf9j" Jan 1 15:03:27.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.131.255.250:80/' Jan 1 15:03:27.579: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n" Jan 1 15:03:27.579: INFO: stdout: "affinity-clusterip-timeout-8tf9j" Jan 1 15:03:47.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6083 exec execpod-affinityzs45k -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://10.131.255.250:80/' Jan 1 15:03:47.770: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://10.131.255.250:80/\n" Jan 1 15:03:47.770: INFO: stdout: "affinity-clusterip-timeout-9zlpp" Jan 1 15:03:47.770: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip-timeout in namespace services-6083, will wait for the garbage collector to delete the pods Jan 1 15:03:47.842: INFO: Deleting ReplicationController affinity-clusterip-timeout took: 6.389538ms Jan 1 15:03:47.944: INFO: Terminating ReplicationController affinity-clusterip-timeout pods took: 101.179604ms [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:03:49.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-6083" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":20,"skipped":379,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:03:30.136: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 15:03:30.157: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Creating first CR Jan 1 15:03:32.728: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:32Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:03:32Z]] name:name1 resourceVersion:11806 uid:a2302e70-383b-4dfc-af1b-e0532047f98d] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Creating second CR Jan 1 15:03:42.734: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:42Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:03:42Z]] name:name2 resourceVersion:11850 uid:d625566c-5891-42bb-bf01-5bb650e1fb5f] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Modifying first CR Jan 1 15:03:52.743: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:32Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:03:52Z]] name:name1 resourceVersion:11920 uid:a2302e70-383b-4dfc-af1b-e0532047f98d] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Modifying second CR Jan 1 15:04:02.753: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 
content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:04:02Z]] name:name2 resourceVersion:11980 uid:d625566c-5891-42bb-bf01-5bb650e1fb5f] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Deleting first CR Jan 1 15:04:12.759: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:32Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:03:52Z]] name:name1 resourceVersion:11998 uid:a2302e70-383b-4dfc-af1b-e0532047f98d] num:map[num1:9223372036854775807 num2:1000000]]} �[1mSTEP�[0m: Deleting second CR Jan 1 15:04:22.767: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-01T15:03:42Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-01T15:04:02Z]] name:name2 resourceVersion:12022 uid:d625566c-5891-42bb-bf01-5bb650e1fb5f] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:04:33.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-watch-2679" for this suite. 
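The spec above drives ADDED/MODIFIED/DELETED watch events for two custom resources of the throwaway WishIHadChosenNoxu kind in the mygroup.example.com group. A minimal sketch of observing the same kind of event stream with kubectl while the test CRD is still installed (the plural resource name is generated by the test and not shown in the log, so <plural> below is a stand-in you would fill in from discovery):

# The CRD the spec installs has a generated name; see what it registered under the group first.
kubectl --kubeconfig=/tmp/kubeconfig api-resources --api-group=mygroup.example.com
# Then watch instances of it; <plural> stands in for the name printed above.
kubectl --kubeconfig=/tmp/kubeconfig get <plural>.mygroup.example.com --watch --output-watch-events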
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":10,"skipped":360,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:04:33.318: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jan 1 15:04:33.359: INFO: Waiting up to 5m0s for pod "security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3" in namespace "security-context-8714" to be "Succeeded or Failed"
Jan 1 15:04:33.364: INFO: Pod "security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316538ms
Jan 1 15:04:35.368: INFO: Pod "security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008993557s
Jan 1 15:04:37.373: INFO: Pod "security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013455121s
STEP: Saw pod success
Jan 1 15:04:37.373: INFO: Pod "security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3" satisfied condition "Succeeded or Failed"
Jan 1 15:04:37.376: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3 container test-container: <nil>
STEP: delete the pod
Jan 1 15:04:37.399: INFO: Waiting for pod security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3 to disappear
Jan 1 15:04:37.402: INFO: Pod security-context-354f087a-2b60-4e1a-972d-8d29ec0c38f3 no longer exists
[AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:04:37.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8714" for this suite.
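The pod this spec creates runs its test container under a non-root UID/GID set at the pod level and simply has to reach Succeeded with output reflecting the requested IDs. A hand-rolled equivalent, as a sketch only (the pod name and the 1000/3000 IDs are illustrative, not the values the framework generates; the busybox image is the one the suite uses elsewhere in this log):

cat <<'EOF' | kubectl --kubeconfig=/tmp/kubeconfig apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: runasuser-demo          # illustrative name, not from the test
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000             # pod.Spec.SecurityContext.RunAsUser
    runAsGroup: 3000            # pod.Spec.SecurityContext.RunAsGroup
  containers:
  - name: test-container
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-2
    command: ["sh", "-c", "id"]
EOF
# Once the pod has completed, its log should report uid=1000 and gid=3000.
kubectl --kubeconfig=/tmp/kubeconfig logs runasuser-demo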
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":11,"skipped":374,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:04:37.423: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 [It] should update a single-container pod's image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Jan 1 15:04:37.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7619 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Jan 1 15:04:37.527: INFO: stderr: "" Jan 1 15:04:37.527: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Jan 1 15:04:42.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7619 get pod e2e-test-httpd-pod -o json' Jan 1 15:04:42.657: INFO: stderr: "" Jan 1 15:04:42.657: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2023-01-01T15:04:37Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7619\",\n \"resourceVersion\": \"12087\",\n \"uid\": \"668e9e68-a098-4066-915a-ded9e1045a66\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-bxdnh\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n 
\"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-bxdnh\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-01T15:04:37Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-01T15:04:38Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-01T15:04:38Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2023-01-01T15:04:37Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://922d7a384288cae234f6f6e2909f68374a197337719685c7e0e41ce3da930e83\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2023-01-01T15:04:38Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.0.80\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.0.80\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2023-01-01T15:04:37Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Jan 1 15:04:42.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7619 replace -f -' Jan 1 15:04:43.607: INFO: stderr: "" Jan 1 15:04:43.608: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 Jan 1 15:04:43.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7619 delete pods e2e-test-httpd-pod' Jan 1 15:04:45.360: INFO: stderr: "" Jan 1 15:04:45.360: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:04:45.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7619" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":12,"skipped":381,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:04:45.402: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0777 on tmpfs Jan 1 15:04:45.433: INFO: Waiting up to 5m0s for pod "pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a" in namespace "emptydir-3820" to be "Succeeded or Failed" Jan 1 15:04:45.438: INFO: Pod "pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.384107ms Jan 1 15:04:47.443: INFO: Pod "pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009630072s Jan 1 15:04:49.448: INFO: Pod "pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014146311s �[1mSTEP�[0m: Saw pod success Jan 1 15:04:49.448: INFO: Pod "pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a" satisfied condition "Succeeded or Failed" Jan 1 15:04:49.451: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:04:49.466: INFO: Waiting for pod pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a to disappear Jan 1 15:04:49.468: INFO: Pod pod-64ee59e0-1176-4d5f-9953-692bd2f90a2a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:04:49.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3820" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":404,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:04:49.518: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 1 15:04:49.547: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7" in namespace "projected-5430" to be "Succeeded or Failed" Jan 1 15:04:49.551: INFO: Pod "downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.980112ms Jan 1 15:04:51.554: INFO: Pod "downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7": Phase="Running", Reason="", readiness=false. Elapsed: 2.006435209s Jan 1 15:04:53.560: INFO: Pod "downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011840806s �[1mSTEP�[0m: Saw pod success Jan 1 15:04:53.560: INFO: Pod "downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7" satisfied condition "Succeeded or Failed" Jan 1 15:04:53.563: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:04:53.586: INFO: Waiting for pod downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7 to disappear Jan 1 15:04:53.589: INFO: Pod downwardapi-volume-a78bee24-d6e3-41d6-96f4-5d4f4a1d4ea7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:04:53.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-5430" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":434,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:04:53.676: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jan 1 15:04:53.729: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jan 1 15:04:53.750: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:04:53.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1155" for this suite.
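The RuntimeClass spec exercises the plain API verbs against /apis/node.k8s.io/v1: create, get, list, watch, patch, update, delete, and deleteCollection. The same verbs can be driven from kubectl, as a sketch only (the object name and the runc handler value are illustrative, not the generated ones the test uses):

# RuntimeClass is cluster-scoped; "runc" is an illustrative handler name.
cat <<'EOF' | kubectl --kubeconfig=/tmp/kubeconfig create -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: demo-runtimeclass
handler: runc
EOF
kubectl --kubeconfig=/tmp/kubeconfig get runtimeclasses
kubectl --kubeconfig=/tmp/kubeconfig patch runtimeclass demo-runtimeclass --type=merge -p '{"metadata":{"annotations":{"patched":"true"}}}'
kubectl --kubeconfig=/tmp/kubeconfig delete runtimeclass demo-runtimeclass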
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":15,"skipped":490,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 14:59:53.942: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 1 14:59:53.980: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:55.985: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 1 14:59:55.995: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:57.999: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 14:59:59.999: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:02.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:04.001: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:06.002: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:08.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:09.999: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:12.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:13.999: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:16.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:18.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:20.000: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:00:21.999: INFO: The status of 
Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) [... identical "Pending, waiting for it to be Running (with Ready = true)" entries logged roughly every 2s until Jan 1 15:02:06.000 ...] Jan 1 15:02:08.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) [... identical "Running (Ready = false)" entries logged roughly every 2s until Jan 1 15:04:28.001 ...] Jan 1 15:04:30.001: INFO: The status of Pod
pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:32.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:34.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:36.001: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:38.001: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:39.999: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:42.001: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:44.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:46.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:48.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:50.001: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:52.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:53.999: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:56.000: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:56.003: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:04:56.003: FAIL: Unexpected error: <*errors.errorString | 0xc0002c82c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc002b966d8, 0x8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc002d28000) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0002bed00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:04:56.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-9237" for this suite. 
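[Editor's note] The test that just timed out above creates a pod whose container declares a postStart exec lifecycle hook; the pod reached Running but never became Ready within the 5-minute CreateSync timeout (its failure summary follows). A rough sketch of that pod shape in Go (image, command, and hook payload are illustrative; the real test posts to the pod-handle-http-request helper, and older k8s.io/api releases name the hook type Handler rather than LifecycleHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Single-container pod with a postStart exec hook, mirroring the shape the
	// lifecycle-hook conformance test creates.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "registry.k8s.io/e2e-test-images/agnhost:2.39",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// The real test calls out to a helper pod; a plain echo keeps the sketch self-contained.
							Command: []string{"sh", "-c", "echo poststart"},
						},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}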
�[91m�[1m• Failure [302.070 seconds]�[0m [sig-node] Container Lifecycle Hook �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m when create a pod with lifecycle hook �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44�[0m �[91m�[1mshould execute poststart exec hook properly [NodeConformance] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 1 15:04:56.003: Unexpected error: <*errors.errorString | 0xc0002c82c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:04:53.789: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: submitting the pod to kubernetes Jan 1 15:04:53.822: INFO: The status of Pod pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898 is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:04:55.826: INFO: The status of Pod pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898 is Running (Ready = true) �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: updating the pod Jan 1 15:04:56.343: INFO: Successfully updated pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898" Jan 1 15:04:56.343: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898" in namespace "pods-2770" to be "terminated due to deadline exceeded" Jan 1 15:04:56.346: INFO: Pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898": Phase="Running", Reason="", readiness=true. Elapsed: 3.429246ms Jan 1 15:04:58.351: INFO: Pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898": Phase="Running", Reason="", readiness=true. Elapsed: 2.007702454s Jan 1 15:05:00.355: INFO: Pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.011921067s Jan 1 15:05:00.355: INFO: Pod "pod-update-activedeadlineseconds-8d1bf32a-b42f-4538-b269-c0c1184b8898" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:00.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-2770" for this suite. 
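[Editor's note] The activeDeadlineSeconds test above relies on the kubelet enforcing spec.activeDeadlineSeconds on a running pod, after which the pod fails with reason DeadlineExceeded. A minimal sketch of such a pod in Go (name, image, and deadline are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod that the kubelet will terminate (reason DeadlineExceeded) after the deadline elapses.
	deadline := int64(5)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "registry.k8s.io/pause:3.7",
			}},
		},
	}
	fmt.Printf("%s terminates after %ds\n", pod.Name, *pod.Spec.ActiveDeadlineSeconds)
}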
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":496,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:00.387: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs �[1mSTEP�[0m: Gathering metrics Jan 1 15:05:01.485: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-upqhfa-bk7tk-vbnvt is Running (Ready = true) Jan 1 15:05:01.545: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:01.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-6715" for this suite. 
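[Editor's note] The garbage-collector test above deletes a Deployment with propagationPolicy=Orphan and then verifies the ReplicaSet survives. A hedged sketch of that delete call with client-go (kubeconfig path, namespace, and Deployment name are illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a kubeconfig (path is illustrative).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Delete the Deployment but orphan its dependents, so the ReplicaSet is left behind.
	orphan := metav1.DeletePropagationOrphan
	err = client.AppsV1().Deployments("default").Delete(context.TODO(),
		"example-deployment", metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}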
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":17,"skipped":508,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:01.573: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-939b34ee-2e00-4e8c-ab98-5a0ec74f3fc0 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 15:05:01.609: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286" in namespace "projected-1194" to be "Succeeded or Failed" Jan 1 15:05:01.614: INFO: Pod "pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286": Phase="Pending", Reason="", readiness=false. Elapsed: 5.264282ms Jan 1 15:05:03.618: INFO: Pod "pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00961507s Jan 1 15:05:05.623: INFO: Pod "pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013961837s �[1mSTEP�[0m: Saw pod success Jan 1 15:05:05.623: INFO: Pod "pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286" satisfied condition "Succeeded or Failed" Jan 1 15:05:05.625: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:05:05.638: INFO: Waiting for pod pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286 to disappear Jan 1 15:05:05.641: INFO: Pod pod-projected-configmaps-7016d672-e461-4f72-b08a-d6cbf4d8e286 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:05.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-1194" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":520,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:05.692: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service clusterip-service with the type=ClusterIP in namespace services-3762 �[1mSTEP�[0m: Creating active service to test reachability when its FQDN is referred as externalName for another service �[1mSTEP�[0m: creating service externalsvc in namespace services-3762 �[1mSTEP�[0m: creating replication controller externalsvc in namespace services-3762 I0101 15:05:05.778707 15 runners.go:193] Created replication controller with name: externalsvc, namespace: services-3762, replica count: 2 I0101 15:05:08.829448 15 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady �[1mSTEP�[0m: changing the ClusterIP service to type=ExternalName Jan 1 15:05:08.849: INFO: Creating new exec pod Jan 1 15:05:10.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3762 exec execpodrwh7g -- /bin/sh -x -c nslookup clusterip-service.services-3762.svc.cluster.local' Jan 1 15:05:11.070: INFO: stderr: "+ nslookup clusterip-service.services-3762.svc.cluster.local\n" Jan 1 15:05:11.070: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-3762.svc.cluster.local\tcanonical name = externalsvc.services-3762.svc.cluster.local.\nName:\texternalsvc.services-3762.svc.cluster.local\nAddress: 10.134.109.86\n\n" �[1mSTEP�[0m: deleting ReplicationController externalsvc in namespace services-3762, will wait for the garbage collector to delete the pods Jan 1 15:05:11.130: INFO: Deleting ReplicationController externalsvc took: 5.311555ms Jan 1 15:05:11.230: INFO: Terminating ReplicationController externalsvc pods took: 100.279871ms Jan 1 15:05:12.843: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:12.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3762" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":19,"skipped":551,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:12.875: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-66688285-16b1-4885-9e52-ea89900838f7 �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 1 15:05:12.905: INFO: Waiting up to 5m0s for pod "pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8" in namespace "secrets-365" to be "Succeeded or Failed" Jan 1 15:05:12.909: INFO: Pod "pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.014652ms Jan 1 15:05:14.912: INFO: Pod "pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006887684s Jan 1 15:05:16.917: INFO: Pod "pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01171519s �[1mSTEP�[0m: Saw pod success Jan 1 15:05:16.917: INFO: Pod "pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8" satisfied condition "Succeeded or Failed" Jan 1 15:05:16.920: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:05:16.937: INFO: Waiting for pod pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8 to disappear Jan 1 15:05:16.939: INFO: Pod pod-secrets-ccfb084b-bb66-43e8-9b75-853b3c2e59f8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:16.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-365" for this suite. 
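[Editor's note] The Secret defaultMode test above mounts a Secret as a volume and checks that the projected files carry mode 0644. A minimal sketch of that volume definition in Go (volume and Secret names are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Secret volume whose keys are projected as files with mode 0644.
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName:  "secret-test",
				DefaultMode: &mode,
			},
		},
	}
	fmt.Printf("%s -> defaultMode %o\n", vol.Name, *vol.VolumeSource.Secret.DefaultMode)
}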
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":558,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:16.974: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating replication controller my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4 Jan 1 15:05:17.007: INFO: Pod name my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4: Found 0 pods out of 1 Jan 1 15:05:22.011: INFO: Pod name my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4: Found 1 pods out of 1 Jan 1 15:05:22.011: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4" are running Jan 1 15:05:22.013: INFO: Pod "my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4-j268m" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-01 15:05:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-01 15:05:18 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-01 15:05:18 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-01 15:05:17 +0000 UTC Reason: Message:}]) Jan 1 15:05:22.013: INFO: Trying to dial the pod Jan 1 15:05:27.023: INFO: Controller my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4: Got expected result from replica 1 [my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4-j268m]: "my-hostname-basic-1c8340e2-2d16-4c78-935d-7d5a0f12b2f4-j268m", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:27.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-3657" for this suite. 
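[Editor's note] The ReplicationController test above runs a single replica of a serve-hostname server and dials it until it answers with its own pod name. A rough Go sketch of the controller it creates (names, image, and port are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One-replica ReplicationController whose pod serves its hostname over HTTP.
	replicas := int32(1)
	labels := map[string]string{"name": "my-hostname-basic"}
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "serve-hostname",
						Image: "registry.k8s.io/e2e-test-images/agnhost:2.39",
						Args:  []string{"serve-hostname"},
						Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
					}},
				},
			},
		},
	}
	fmt.Printf("%s: %d replica(s)\n", rc.Name, *rc.Spec.Replicas)
}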
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":21,"skipped":578,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:27.073: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 15:05:27.106: INFO: Waiting up to 5m0s for pod "busybox-user-65534-32353519-d308-4f34-9869-c9cd4aedbb8a" in namespace "security-context-test-1907" to be "Succeeded or Failed" Jan 1 15:05:27.108: INFO: Pod "busybox-user-65534-32353519-d308-4f34-9869-c9cd4aedbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493504ms Jan 1 15:05:29.115: INFO: Pod "busybox-user-65534-32353519-d308-4f34-9869-c9cd4aedbb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008851469s Jan 1 15:05:31.121: INFO: Pod "busybox-user-65534-32353519-d308-4f34-9869-c9cd4aedbb8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014929699s Jan 1 15:05:31.121: INFO: Pod "busybox-user-65534-32353519-d308-4f34-9869-c9cd4aedbb8a" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:31.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-1907" for this suite. 
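[Editor's note] The Security Context test above only needs a pod whose container is forced to run as UID 65534 and then checks the UID the process reports. A minimal sketch in Go (name, image, and command are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Run-once pod whose container runs as UID 65534 and prints it.
	uid := int64(65534)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "busybox",
				Image:           "busybox:1.36",
				Command:         []string{"sh", "-c", "id -u"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}
	fmt.Printf("%s runs as UID %d\n", pod.Name, *pod.Spec.Containers[0].SecurityContext.RunAsUser)
}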
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":607,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:31.419: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 1 15:05:31.460: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63" in namespace "downward-api-6404" to be "Succeeded or Failed" Jan 1 15:05:31.464: INFO: Pod "downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393209ms Jan 1 15:05:33.471: INFO: Pod "downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.011537278s Jan 1 15:05:35.475: INFO: Pod "downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015662465s �[1mSTEP�[0m: Saw pod success Jan 1 15:05:35.475: INFO: Pod "downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63" satisfied condition "Succeeded or Failed" Jan 1 15:05:35.478: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:05:35.496: INFO: Waiting for pod downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63 to disappear Jan 1 15:05:35.499: INFO: Pod downwardapi-volume-64813423-2a44-477f-8c1d-116215e7be63 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:35.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-6404" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":753,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:35.518: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename job �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a job �[1mSTEP�[0m: Ensuring active pods == parallelism �[1mSTEP�[0m: Orphaning one of the Job's Pods Jan 1 15:05:38.060: INFO: Successfully updated pod "adopt-release-82nrj" �[1mSTEP�[0m: Checking that the Job readopts the Pod Jan 1 15:05:38.060: INFO: Waiting up to 15m0s for pod "adopt-release-82nrj" in namespace "job-7984" to be "adopted" Jan 1 15:05:38.063: INFO: Pod "adopt-release-82nrj": Phase="Running", Reason="", readiness=true. Elapsed: 2.825872ms Jan 1 15:05:40.068: INFO: Pod "adopt-release-82nrj": Phase="Running", Reason="", readiness=true. Elapsed: 2.007288626s Jan 1 15:05:40.068: INFO: Pod "adopt-release-82nrj" satisfied condition "adopted" �[1mSTEP�[0m: Removing the labels from the Job's Pod Jan 1 15:05:40.582: INFO: Successfully updated pod "adopt-release-82nrj" �[1mSTEP�[0m: Checking that the Job releases the Pod Jan 1 15:05:40.582: INFO: Waiting up to 15m0s for pod "adopt-release-82nrj" in namespace "job-7984" to be "released" Jan 1 15:05:40.586: INFO: Pod "adopt-release-82nrj": Phase="Running", Reason="", readiness=true. Elapsed: 4.465986ms Jan 1 15:05:42.590: INFO: Pod "adopt-release-82nrj": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.008727661s Jan 1 15:05:42.590: INFO: Pod "adopt-release-82nrj" satisfied condition "released" [AfterEach] [sig-apps] Job /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:42.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "job-7984" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":24,"skipped":760,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:42.665: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward api env vars Jan 1 15:05:42.696: INFO: Waiting up to 5m0s for pod "downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2" in namespace "downward-api-1228" to be "Succeeded or Failed" Jan 1 15:05:42.701: INFO: Pod "downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540665ms Jan 1 15:05:44.705: INFO: Pod "downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2": Phase="Running", Reason="", readiness=false. Elapsed: 2.008612226s Jan 1 15:05:46.709: INFO: Pod "downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012927113s �[1mSTEP�[0m: Saw pod success Jan 1 15:05:46.709: INFO: Pod "downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2" satisfied condition "Succeeded or Failed" Jan 1 15:05:46.712: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:05:46.733: INFO: Waiting for pod downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2 to disappear Jan 1 15:05:46.735: INFO: Pod downward-api-3d3f994f-80cb-45ea-944f-4147e75fd1b2 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:46.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-1228" for this suite. 
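[Editor's note] The Downward API test above injects the node's IP into the container environment from status.hostIP and then inspects the container log. A minimal sketch of that env var in Go (the variable name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Downward-API env var that resolves to the pod's host IP at runtime.
	env := corev1.EnvVar{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}
	fmt.Printf("%s <- %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}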
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":794,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:46.831: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:46.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-6047" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":26,"skipped":858,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:46.885: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Jan 1 15:05:46.916: INFO: Waiting up to 5m0s for pod "pod-864f1f3b-f30a-4c73-bcf8-96753d099c27" in namespace "emptydir-3172" to be "Succeeded or Failed" Jan 1 15:05:46.919: INFO: Pod "pod-864f1f3b-f30a-4c73-bcf8-96753d099c27": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015482ms Jan 1 15:05:48.923: INFO: Pod "pod-864f1f3b-f30a-4c73-bcf8-96753d099c27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007089721s Jan 1 15:05:50.927: INFO: Pod "pod-864f1f3b-f30a-4c73-bcf8-96753d099c27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010835876s �[1mSTEP�[0m: Saw pod success Jan 1 15:05:50.927: INFO: Pod "pod-864f1f3b-f30a-4c73-bcf8-96753d099c27" satisfied condition "Succeeded or Failed" Jan 1 15:05:50.930: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod pod-864f1f3b-f30a-4c73-bcf8-96753d099c27 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:05:50.948: INFO: Waiting for pod pod-864f1f3b-f30a-4c73-bcf8-96753d099c27 to disappear Jan 1 15:05:50.951: INFO: Pod pod-864f1f3b-f30a-4c73-bcf8-96753d099c27 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:05:50.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-3172" for this suite. 
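[Editor's note] The EmptyDir (root,0644,tmpfs) test above mounts a memory-backed emptyDir and verifies both the file content and the 0644 permissions written by a root container. A minimal sketch of that volume in Go (the volume name is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Memory-backed (tmpfs) emptyDir volume of the kind the test mounts.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Printf("%s backed by %q\n", vol.Name, vol.VolumeSource.EmptyDir.Medium)
}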
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":867,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:05:50.978: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 1 15:05:51.435: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 1 15:05:54.453: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Setting timeout (1s) shorter than webhook latency (5s) �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Request fails when timeout (1s) is shorter than slow webhook latency (5s) �[1mSTEP�[0m: Having no error when timeout is shorter than webhook latency and failure policy is ignore �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Having no error when timeout is longer than webhook latency �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API �[1mSTEP�[0m: Having no error when timeout is empty (defaulted to 10s in v1) �[1mSTEP�[0m: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:06.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-3257" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-3257-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":28,"skipped":879,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:06.706: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap that has name configmap-test-emptyKey-ac7306c8-0fe2-4f08-9353-97e21faca5e7 [AfterEach] [sig-node] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:06.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-4818" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":29,"skipped":882,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:06.780: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 1 15:06:06.836: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18" in namespace "downward-api-8740" to be "Succeeded or Failed" Jan 1 15:06:06.840: INFO: Pod "downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.954539ms Jan 1 15:06:08.845: INFO: Pod "downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008820467s Jan 1 15:06:10.849: INFO: Pod "downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013678354s �[1mSTEP�[0m: Saw pod success Jan 1 15:06:10.849: INFO: Pod "downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18" satisfied condition "Succeeded or Failed" Jan 1 15:06:10.853: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:06:10.874: INFO: Waiting for pod downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18 to disappear Jan 1 15:06:10.877: INFO: Pod downwardapi-volume-65c0c5a2-9271-4155-b744-48cb761f6a18 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:10.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-8740" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":887,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:10.956: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Jan 1 15:06:10.992: INFO: The status of Pod annotationupdate7f4b54ec-c32d-4078-a8b2-9f7a2989aebd is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:06:12.999: INFO: The status of Pod annotationupdate7f4b54ec-c32d-4078-a8b2-9f7a2989aebd is Running (Ready = true) Jan 1 15:06:13.526: INFO: Successfully updated pod "annotationupdate7f4b54ec-c32d-4078-a8b2-9f7a2989aebd" [AfterEach] [sig-storage] Downward API volume 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:17.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-9708" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":925,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:17.646: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if v1 is in available api versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: validating api versions Jan 1 15:06:17.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8768 api-versions' Jan 1 15:06:17.768: INFO: stderr: "" Jan 1 15:06:17.768: INFO: stdout: "admissionregistration.k8s.io/v1\napiextensions.k8s.io/v1\napiregistration.k8s.io/v1\napps/v1\nauthentication.k8s.io/v1\nauthorization.k8s.io/v1\nautoscaling/v1\nautoscaling/v2\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncoordination.k8s.io/v1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta2\nnetworking.k8s.io/v1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nscheduling.k8s.io/v1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:17.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8768" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":-1,"completed":32,"skipped":974,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:17.795: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 1 15:06:18.376: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 1 15:06:21.398: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the mutating configmap webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:21.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-6822" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-6822-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":33,"skipped":985,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:21.568: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test override all Jan 1 15:06:21.603: INFO: Waiting up to 5m0s for pod "client-containers-e6392c6e-d841-408d-b44d-191c3f62df44" in namespace "containers-3757" to be "Succeeded or Failed" Jan 1 15:06:21.609: INFO: Pod "client-containers-e6392c6e-d841-408d-b44d-191c3f62df44": Phase="Pending", Reason="", readiness=false. Elapsed: 5.218862ms Jan 1 15:06:23.613: INFO: Pod "client-containers-e6392c6e-d841-408d-b44d-191c3f62df44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009423624s Jan 1 15:06:25.617: INFO: Pod "client-containers-e6392c6e-d841-408d-b44d-191c3f62df44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013597211s �[1mSTEP�[0m: Saw pod success Jan 1 15:06:25.617: INFO: Pod "client-containers-e6392c6e-d841-408d-b44d-191c3f62df44" satisfied condition "Succeeded or Failed" Jan 1 15:06:25.620: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-worker-9emfga pod client-containers-e6392c6e-d841-408d-b44d-191c3f62df44 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:06:25.636: INFO: Waiting for pod client-containers-e6392c6e-d841-408d-b44d-191c3f62df44 to disappear Jan 1 15:06:25.639: INFO: Pod client-containers-e6392c6e-d841-408d-b44d-191c3f62df44 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:25.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-3757" for this suite. 
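The "Docker Containers" case above (and the entrypoint-only variant that follows) checks that command: and args: in the container spec override the image's default entrypoint and arguments. In manifest form the override is just two fields (image taken from this run, everything else illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-entrypoint-demo           # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: agnhost-container
      image: k8s.gcr.io/e2e-test-images/agnhost:2.39
      command: ["/agnhost"]                  # replaces the image ENTRYPOINT
      args: ["pause"]                        # replaces the image CMD
  EOF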
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":1017,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:25.654: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename containers �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test override command Jan 1 15:06:25.690: INFO: Waiting up to 5m0s for pod "client-containers-90953167-77c9-4d37-b8f9-e375149fe469" in namespace "containers-4827" to be "Succeeded or Failed" Jan 1 15:06:25.695: INFO: Pod "client-containers-90953167-77c9-4d37-b8f9-e375149fe469": Phase="Pending", Reason="", readiness=false. Elapsed: 4.540557ms Jan 1 15:06:27.699: INFO: Pod "client-containers-90953167-77c9-4d37-b8f9-e375149fe469": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009069733s Jan 1 15:06:29.705: INFO: Pod "client-containers-90953167-77c9-4d37-b8f9-e375149fe469": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0146888s �[1mSTEP�[0m: Saw pod success Jan 1 15:06:29.705: INFO: Pod "client-containers-90953167-77c9-4d37-b8f9-e375149fe469" satisfied condition "Succeeded or Failed" Jan 1 15:06:29.708: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod client-containers-90953167-77c9-4d37-b8f9-e375149fe469 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:06:29.724: INFO: Waiting for pod client-containers-90953167-77c9-4d37-b8f9-e375149fe469 to disappear Jan 1 15:06:29.729: INFO: Pod client-containers-90953167-77c9-4d37-b8f9-e375149fe469 no longer exists [AfterEach] [sig-node] Docker Containers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:29.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "containers-4827" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":1018,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:29.783: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-ea126cdb-0041-40fa-9aa9-501228b2c0a8 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 15:06:29.824: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd" in namespace "projected-7468" to be "Succeeded or Failed" Jan 1 15:06:29.828: INFO: Pod "pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.896133ms Jan 1 15:06:31.832: INFO: Pod "pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008168414s Jan 1 15:06:33.840: INFO: Pod "pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016688274s �[1mSTEP�[0m: Saw pod success Jan 1 15:06:33.841: INFO: Pod "pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd" satisfied condition "Succeeded or Failed" Jan 1 15:06:33.847: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:06:33.871: INFO: Waiting for pod pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd to disappear Jan 1 15:06:33.874: INFO: Pod pod-projected-configmaps-04747c8d-bda6-41ff-b89d-384b577507fd no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:33.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7468" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":1049,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:33.894: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-map-68afebde-c5c5-4f7d-bc82-7290fc3a667e �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 1 15:06:33.948: INFO: Waiting up to 5m0s for pod "pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba" in namespace "secrets-587" to be "Succeeded or Failed" Jan 1 15:06:33.954: INFO: Pod "pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba": Phase="Pending", Reason="", readiness=false. Elapsed: 5.129575ms Jan 1 15:06:35.959: INFO: Pod "pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010466072s Jan 1 15:06:37.966: INFO: Pod "pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016979315s �[1mSTEP�[0m: Saw pod success Jan 1 15:06:37.966: INFO: Pod "pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba" satisfied condition "Succeeded or Failed" Jan 1 15:06:37.970: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:06:37.994: INFO: Waiting for pod pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba to disappear Jan 1 15:06:37.998: INFO: Pod pod-secrets-c654b3b4-1e75-428c-a3c9-ed04b0498eba no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:37.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-587" for this suite. 
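The Secrets case above mounts a secret as a volume and remaps a key to a custom file name via items:, which is the "mappings" in the test name. A hand-written equivalent (names, image, key and data are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-test-map-demo               # illustrative name
  stringData:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-secrets-demo
  spec:
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox:1.36                    # assumed image
      command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
        readOnly: true
    volumes:
    - name: secret-volume
      secret:
        secretName: secret-test-map-demo
        items:                               # key -> path remapping exercised by the test
        - key: data-1
          path: new-path-data-1
  EOF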
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":1050,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:38.042: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 1 15:06:38.103: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:06:40.110: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 1 15:06:40.128: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:06:42.135: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) �[1mSTEP�[0m: check poststart hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 1 15:06:42.160: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 15:06:42.166: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 15:06:44.167: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 15:06:44.173: INFO: Pod pod-with-poststart-http-hook still exists Jan 1 15:06:46.168: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jan 1 15:06:46.174: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:46.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-2042" for this suite. 
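The lifecycle-hook case above first starts a pod-handle-http-request pod and then a second pod whose postStart hook performs an HTTP GET against it right after the container starts. The hook is declared on the container spec; a sketch (host, port and path are placeholders for wherever the handler pod is reachable):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-with-poststart-http-hook-demo  # illustrative name
  spec:
    containers:
    - name: main
      image: k8s.gcr.io/e2e-test-images/agnhost:2.39
      args: ["pause"]
      lifecycle:
        postStart:
          httpGet:
            path: /echo?msg=poststart        # placeholder path
            host: 10.244.0.10                # placeholder: the handler pod's IP
            port: 8080
  EOF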
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":1059,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:46.206: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 �[1mSTEP�[0m: creating an pod Jan 1 15:06:46.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 1 15:06:46.721: INFO: stderr: "" Jan 1 15:06:46.721: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for log generator to start. Jan 1 15:06:46.721: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 1 15:06:46.721: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-3785" to be "running and ready, or succeeded" Jan 1 15:06:46.728: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.857805ms Jan 1 15:06:48.735: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.013614906s Jan 1 15:06:48.735: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 1 15:06:48.735: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] �[1mSTEP�[0m: checking for a matching strings Jan 1 15:06:48.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator' Jan 1 15:06:48.894: INFO: stderr: "" Jan 1 15:06:48.894: INFO: stdout: "I0101 15:06:47.613915 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/8kv 444\nI0101 15:06:47.814256 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/j6rq 590\nI0101 15:06:48.014173 1 logs_generator.go:76] 2 GET /api/v1/namespaces/default/pods/5ww5 587\nI0101 15:06:48.214717 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/4vdl 344\nI0101 15:06:48.414143 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/42wk 504\nI0101 15:06:48.614732 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5bh 489\nI0101 15:06:48.814157 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/c4m 268\n" �[1mSTEP�[0m: limiting log lines Jan 1 15:06:48.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator --tail=1' Jan 1 15:06:49.049: INFO: stderr: "" Jan 1 15:06:49.049: INFO: stdout: "I0101 15:06:49.015168 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/smz 317\n" Jan 1 15:06:49.049: INFO: got output "I0101 15:06:49.015168 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/smz 317\n" �[1mSTEP�[0m: limiting log bytes Jan 1 15:06:49.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator --limit-bytes=1' Jan 1 15:06:49.259: INFO: stderr: "" Jan 1 15:06:49.259: INFO: stdout: "I" Jan 1 15:06:49.259: INFO: got output "I" �[1mSTEP�[0m: exposing timestamps Jan 1 15:06:49.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator --tail=1 --timestamps' Jan 1 15:06:49.413: INFO: stderr: "" Jan 1 15:06:49.413: INFO: stdout: "2023-01-01T15:06:49.215477937Z I0101 15:06:49.215102 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/9hk 372\n" Jan 1 15:06:49.414: INFO: got output "2023-01-01T15:06:49.215477937Z I0101 15:06:49.215102 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/9hk 372\n" �[1mSTEP�[0m: restricting to a time range Jan 1 15:06:51.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator --since=1s' Jan 1 15:06:52.071: INFO: stderr: "" Jan 1 15:06:52.071: INFO: stdout: "I0101 15:06:51.215162 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/rxw 332\nI0101 15:06:51.414970 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/tsl 297\nI0101 15:06:51.614609 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/c7t 483\nI0101 15:06:51.815006 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/6sz 272\nI0101 15:06:52.014707 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/z5h 409\n" Jan 1 15:06:52.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 logs logs-generator logs-generator --since=24h' Jan 1 15:06:52.228: INFO: stderr: "" Jan 1 15:06:52.228: INFO: stdout: "I0101 15:06:47.613915 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/8kv 444\nI0101 15:06:47.814256 1 logs_generator.go:76] 1 GET /api/v1/namespaces/kube-system/pods/j6rq 590\nI0101 15:06:48.014173 1 logs_generator.go:76] 2 GET 
/api/v1/namespaces/default/pods/5ww5 587\nI0101 15:06:48.214717 1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/4vdl 344\nI0101 15:06:48.414143 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/42wk 504\nI0101 15:06:48.614732 1 logs_generator.go:76] 5 GET /api/v1/namespaces/kube-system/pods/5bh 489\nI0101 15:06:48.814157 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/kube-system/pods/c4m 268\nI0101 15:06:49.015168 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/smz 317\nI0101 15:06:49.215102 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/default/pods/9hk 372\nI0101 15:06:49.414631 1 logs_generator.go:76] 9 GET /api/v1/namespaces/kube-system/pods/7wb 352\nI0101 15:06:49.614074 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/tfr6 349\nI0101 15:06:49.814680 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/jrx 410\nI0101 15:06:50.014063 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/kbdz 413\nI0101 15:06:50.214671 1 logs_generator.go:76] 13 POST /api/v1/namespaces/ns/pods/mb4 538\nI0101 15:06:50.414228 1 logs_generator.go:76] 14 GET /api/v1/namespaces/default/pods/frb5 385\nI0101 15:06:50.615290 1 logs_generator.go:76] 15 GET /api/v1/namespaces/kube-system/pods/dvq 521\nI0101 15:06:50.814725 1 logs_generator.go:76] 16 GET /api/v1/namespaces/default/pods/7xz 598\nI0101 15:06:51.014315 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/2s9l 239\nI0101 15:06:51.215162 1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/rxw 332\nI0101 15:06:51.414970 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/tsl 297\nI0101 15:06:51.614609 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/c7t 483\nI0101 15:06:51.815006 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/6sz 272\nI0101 15:06:52.014707 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/z5h 409\nI0101 15:06:52.214236 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/fcn 235\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Jan 1 15:06:52.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3785 delete pod logs-generator' Jan 1 15:06:52.835: INFO: stderr: "" Jan 1 15:06:52.835: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:06:52.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-3785" for this suite. 
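The Kubectl logs case above exercises the main client-side log filters; the same flags can be replayed by hand against any pod (namespace and names here mirror the test's logs-generator pod):

  kubectl --namespace=kubectl-3785 logs logs-generator logs-generator --tail=1          # last line only
  kubectl --namespace=kubectl-3785 logs logs-generator logs-generator --limit-bytes=1   # truncate output by bytes
  kubectl --namespace=kubectl-3785 logs logs-generator logs-generator --tail=1 --timestamps
  kubectl --namespace=kubectl-3785 logs logs-generator logs-generator --since=1s        # only entries from the last second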
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":39,"skipped":1064,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:06:52.878: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Jan 1 15:06:52.941: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:06:54.948: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 1 15:06:54.964: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:06:56.972: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 1 15:06:56.988: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 1 15:06:56.996: INFO: Pod pod-with-prestop-exec-hook still exists Jan 1 15:06:58.997: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 1 15:06:59.005: INFO: Pod pod-with-prestop-exec-hook still exists Jan 1 15:07:00.997: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 1 15:07:01.006: INFO: Pod pod-with-prestop-exec-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:07:01.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-2326" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":1071,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:07:01.073: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 1 15:07:01.136: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64" in namespace "projected-5581" to be "Succeeded or Failed" Jan 1 15:07:01.141: INFO: Pod "downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64": Phase="Pending", Reason="", readiness=false. Elapsed: 4.645135ms Jan 1 15:07:03.148: INFO: Pod "downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011132844s Jan 1 15:07:05.155: INFO: Pod "downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018498674s �[1mSTEP�[0m: Saw pod success Jan 1 15:07:05.155: INFO: Pod "downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64" satisfied condition "Succeeded or Failed" Jan 1 15:07:05.160: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:07:05.181: INFO: Waiting for pod downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64 to disappear Jan 1 15:07:05.186: INFO: Pod downwardapi-volume-99951675-cc27-4c50-848f-be6d96c3dc64 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:07:05.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-5581" for this suite. 
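The Projected downwardAPI case above publishes the container's CPU limit into a file via resourceFieldRef; because the container sets no limit, the value falls back to the node's allocatable CPU, which is what the test asserts. A sketch of the projection (names and image are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-cpu-limit-demo         # illustrative name
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox:1.36                    # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
  EOF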
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":1083,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:07:05.222: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:07:05.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7515" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":42,"skipped":1093,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:07:05.482: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a watch on configmaps �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: closing the watch once it receives two notifications Jan 1 15:07:05.537: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3956 6376610a-60ae-4d32-b38b-a4b6c1205325 13439 0 2023-01-01 15:07:05 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-01 15:07:05 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 1 15:07:05.537: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3956 6376610a-60ae-4d32-b38b-a4b6c1205325 13440 0 2023-01-01 15:07:05 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-01 15:07:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying the configmap a second time, while the watch is closed �[1mSTEP�[0m: creating a new watch on configmaps from the last resource version observed by the first watch �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: Expecting to observe notifications for all changes to 
the configmap since the first watch closed Jan 1 15:07:05.561: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3956 6376610a-60ae-4d32-b38b-a4b6c1205325 13441 0 2023-01-01 15:07:05 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-01 15:07:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 1 15:07:05.561: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3956 6376610a-60ae-4d32-b38b-a4b6c1205325 13442 0 2023-01-01 15:07:05 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-01 15:07:05 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:07:05.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-3956" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":43,"skipped":1177,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:07:05.668: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename subpath �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 �[1mSTEP�[0m: Setting up data [It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod pod-subpath-test-configmap-68mz �[1mSTEP�[0m: Creating a pod to test atomic-volume-subpath Jan 1 15:07:05.734: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-68mz" in namespace "subpath-1021" to be "Succeeded or Failed" Jan 1 15:07:05.742: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.210675ms Jan 1 15:07:07.749: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 2.014692965s Jan 1 15:07:09.755: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 4.020738319s Jan 1 15:07:11.761: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 6.026495614s Jan 1 15:07:13.767: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 8.032489949s Jan 1 15:07:15.773: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 10.038946001s Jan 1 15:07:17.777: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 12.043429585s Jan 1 15:07:19.784: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 14.050001964s Jan 1 15:07:21.790: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 16.056412137s Jan 1 15:07:23.797: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 18.063381743s Jan 1 15:07:25.802: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=true. Elapsed: 20.067764417s Jan 1 15:07:27.808: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Running", Reason="", readiness=false. Elapsed: 22.073922002s Jan 1 15:07:29.816: INFO: Pod "pod-subpath-test-configmap-68mz": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.081910946s �[1mSTEP�[0m: Saw pod success Jan 1 15:07:29.816: INFO: Pod "pod-subpath-test-configmap-68mz" satisfied condition "Succeeded or Failed" Jan 1 15:07:29.820: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-subpath-test-configmap-68mz container test-container-subpath-configmap-68mz: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:07:29.858: INFO: Waiting for pod pod-subpath-test-configmap-68mz to disappear Jan 1 15:07:29.862: INFO: Pod pod-subpath-test-configmap-68mz no longer exists �[1mSTEP�[0m: Deleting pod pod-subpath-test-configmap-68mz Jan 1 15:07:29.862: INFO: Deleting pod "pod-subpath-test-configmap-68mz" in namespace "subpath-1021" [AfterEach] [sig-storage] Subpath /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:07:29.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "subpath-1021" for this suite. 
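The Subpath case above mounts a single ConfigMap key as a file via subPath rather than mounting the whole volume directory. The essential fields (names, image and data are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: subpath-demo-config                # illustrative name
  data:
    configmap-file: "hello from subpath"
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-subpath-test-configmap-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container-subpath
      image: busybox:1.36                    # assumed image
      command: ["sh", "-c", "cat /test-volume/configmap-file"]
      volumeMounts:
      - name: config
        mountPath: /test-volume/configmap-file
        subPath: configmap-file              # mounts just this key as a file instead of the whole volume
    volumes:
    - name: config
      configMap:
        name: subpath-demo-config
  EOF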
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":44,"skipped":1215,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:07:29.943: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 1 15:07:30.028: INFO: created pod Jan 1 15:07:30.028: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-2573" to be "Succeeded or Failed" Jan 1 15:07:30.048: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 20.715442ms Jan 1 15:07:32.054: INFO: Pod "oidc-discovery-validator": Phase="Running", Reason="", readiness=false. Elapsed: 2.02605069s Jan 1 15:07:34.061: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032997686s �[1mSTEP�[0m: Saw pod success Jan 1 15:07:34.061: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed" Jan 1 15:08:04.061: INFO: polling logs Jan 1 15:08:04.071: INFO: Pod logs: I0101 15:07:30.985898 1 log.go:195] OK: Got token I0101 15:07:30.986069 1 log.go:195] validating with in-cluster discovery I0101 15:07:30.986834 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local I0101 15:07:30.986955 1 log.go:195] Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-2573:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1672586250, NotBefore:1672585650, IssuedAt:1672585650, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2573", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"3d19d761-82f4-4e22-ab8d-77344b680f96"}}} I0101 15:07:31.030863 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local I0101 15:07:31.043248 1 log.go:195] OK: Validated signature on JWT I0101 15:07:31.043819 1 log.go:195] OK: Got valid claims from token! 
I0101 15:07:31.044124 1 log.go:195] Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-2573:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1672586250, NotBefore:1672585650, IssuedAt:1672585650, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-2573", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"3d19d761-82f4-4e22-ab8d-77344b680f96"}}} Jan 1 15:08:04.071: INFO: completed pod [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:08:04.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-2573" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":45,"skipped":1238,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:08:04.130: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-dc2c7bac-4acd-4bd2-ad43-34ad4245a709 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 15:08:04.180: INFO: Waiting up to 5m0s for pod "pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877" in namespace "configmap-5039" to be "Succeeded or Failed" Jan 1 15:08:04.186: INFO: Pod "pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877": Phase="Pending", Reason="", readiness=false. Elapsed: 6.237669ms Jan 1 15:08:06.195: INFO: Pod "pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877": Phase="Running", Reason="", readiness=false. Elapsed: 2.01531629s Jan 1 15:08:08.201: INFO: Pod "pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021454637s �[1mSTEP�[0m: Saw pod success Jan 1 15:08:08.202: INFO: Pod "pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877" satisfied condition "Succeeded or Failed" Jan 1 15:08:08.207: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:08:08.231: INFO: Waiting for pod pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877 to disappear Jan 1 15:08:08.235: INFO: Pod pod-configmaps-86b710f3-130d-4223-8360-aa74db38f877 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:08:08.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-5039" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":1255,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:08:08.389: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: validating cluster-info Jan 1 15:08:08.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-222 cluster-info' Jan 1 15:08:08.577: INFO: stderr: "" Jan 1 15:08:08.577: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:08:08.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying 
namespace "kubectl-222" for this suite.
• ------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":47,"skipped":1313,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:08.679: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[BeforeEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:08.712: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption-2
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should list and delete a collection of PodDisruptionBudgets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be processed
STEP: listing a collection of PDBs across all namespaces
STEP: listing a collection of PDBs in namespace disruption-7726
STEP: deleting a collection of PDBs
STEP: Waiting for the PDB collection to be deleted
[AfterEach] Listing PodDisruptionBudgets for all namespaces
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:08:14.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2-2020" for this suite.
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:08:14.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7726" for this suite.
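The DisruptionController spec above exercises the policy/v1 PodDisruptionBudget API: list PDBs across all namespaces, then delete the whole collection in one namespace and wait for it to disappear. A rough client-go sketch of those two calls; client setup and the namespace name are illustrative.

// pdb.go: list PodDisruptionBudgets cluster-wide, then delete the collection in one namespace.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// "listing a collection of PDBs across all namespaces"
	all, err := cs.PolicyV1().PodDisruptionBudgets(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d PDBs cluster-wide\n", len(all.Items))

	// "deleting a collection of PDBs" in a single namespace
	ns := "disruption-7726" // placeholder namespace name taken from the log
	if err := cs.PolicyV1().PodDisruptionBudgets(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{}); err != nil {
		panic(err)
	}
}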
• ------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":48,"skipped":1347,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:14.971: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:08:15.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-6339" for this suite.
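The Lease spec above only checks that the coordination.k8s.io/v1 Lease API is served. A minimal sketch of creating and reading back a Lease with client-go; the name, namespace, and holder identity are placeholders.

// lease.go: create and read back a coordination.k8s.io/v1 Lease.
package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/utils/pointer"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lease", Namespace: "default"}, // placeholders
		Spec: coordinationv1.LeaseSpec{
			HolderIdentity:       pointer.String("demo-holder"),
			LeaseDurationSeconds: pointer.Int32(30),
		},
	}
	if _, err := cs.CoordinationV1().Leases("default").Create(ctx, lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	got, err := cs.CoordinationV1().Leases("default").Get(ctx, "demo-lease", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("lease held by:", *got.Spec.HolderIdentity)
}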
• ------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":49,"skipped":1391,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:15.201: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 1 15:08:15.251: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae" in namespace "projected-7704" to be "Succeeded or Failed"
Jan 1 15:08:15.256: INFO: Pod "downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.66293ms
Jan 1 15:08:17.263: INFO: Pod "downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011558458s
Jan 1 15:08:19.271: INFO: Pod "downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020286875s
STEP: Saw pod success
Jan 1 15:08:19.272: INFO: Pod "downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae" satisfied condition "Succeeded or Failed"
Jan 1 15:08:19.280: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb pod downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae container client-container: <nil>
STEP: delete the pod
Jan 1 15:08:19.326: INFO: Waiting for pod downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae to disappear
Jan 1 15:08:19.337: INFO: Pod downwardapi-volume-5e24058d-6054-401d-8d88-6fd540cec8ae no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:08:19.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7704" for this suite.
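The Projected downwardAPI spec above creates a pod whose projected volume exposes the container's own memory limit as a file and then checks the file's contents. A sketch of such a pod built with client-go follows; the image, names, and namespace are placeholders, not the suite's exact values.

// downward.go: a pod whose projected downwardAPI volume exposes the container's memory limit.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo", Namespace: "default"}, // placeholders
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.36", // placeholder image with a shell
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									// Expose limits.memory of this container as a file in the volume.
									Path: "memory_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}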
• ------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1430,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:19.370: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5943.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5943.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5943.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5943.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 1 15:08:21.447: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.453: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.463: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.469: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.475: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.480: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.488: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.493: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:21.493: INFO: Lookups using dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5943.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5943.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local jessie_udp@dns-test-service-2.dns-5943.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5943.svc.cluster.local]
(... the same eight lookups failed with identical "the server could not find the requested resource" errors on the polls at 15:08:26, 15:08:31, 15:08:36, 15:08:41 and 15:08:46 ...)
Jan 1 15:08:51.527: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:51.531: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:51.535: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:51.540: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5943.svc.cluster.local from pod dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7: the server could not find the requested resource (get pods dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7)
Jan 1 15:08:51.540: INFO: Lookups using dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7 failed for: [jessie_udp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5943.svc.cluster.local jessie_udp@dns-test-service-2.dns-5943.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5943.svc.cluster.local]
Jan 1 15:08:56.536: INFO: DNS probes using dns-5943/dns-test-f0771d83-ea7b-4aa3-a430-7cfff682cbd7 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:08:56.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5943" for this suite.
• ------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":-1,"completed":51,"skipped":1439,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:08:56.671: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
Jan 1 15:08:57.288: INFO: created pod pod-service-account-defaultsa
Jan 1 15:08:57.288: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 1 15:08:57.294: INFO: created pod pod-service-account-mountsa
Jan 1 15:08:57.294: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 1 15:08:57.307: INFO: created pod pod-service-account-nomountsa
Jan 1 15:08:57.307: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 1 15:08:57.320: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 1 15:08:57.320: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 1 15:08:57.327: INFO: created pod pod-service-account-mountsa-mountspec
Jan 1 15:08:57.328: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 1 15:08:57.337: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 1 15:08:57.338: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 1 15:08:57.356: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 1 15:08:57.356: INFO: pod
pod-service-account-defaultsa-nomountspec service account token volume mount: false Jan 1 15:08:57.386: INFO: created pod pod-service-account-mountsa-nomountspec Jan 1 15:08:57.386: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jan 1 15:08:57.414: INFO: created pod pod-service-account-nomountsa-nomountspec Jan 1 15:08:57.414: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:08:57.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-6190" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":52,"skipped":1454,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:08:57.470: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-76732999-f6a3-41d4-819a-7838be5cddf1 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 1 15:08:57.560: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26" in namespace "projected-7902" to be "Succeeded or Failed" Jan 1 15:08:57.566: INFO: Pod "pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531317ms Jan 1 15:08:59.572: INFO: Pod "pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012246309s Jan 1 15:09:01.577: INFO: Pod "pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017793332s �[1mSTEP�[0m: Saw pod success Jan 1 15:09:01.577: INFO: Pod "pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26" satisfied condition "Succeeded or Failed" Jan 1 15:09:01.582: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 1 15:09:01.603: INFO: Waiting for pod pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26 to disappear Jan 1 15:09:01.605: INFO: Pod pod-projected-configmaps-95235ddf-04db-47a2-af79-59b05e66cb26 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:09:01.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-7902" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1459,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 1 15:09:01.680: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 1 15:09:02.537: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 1 15:09:05.574: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API �[1mSTEP�[0m: create a namespace for the webhook �[1mSTEP�[0m: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:09:05.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1134" for this suite.
STEP: Destroying namespace "webhook-1134-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":54,"skipped":1484,"failed":3,"failures":["[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]","[sig-network] Services should serve multiport endpoints from pods [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":235,"failed":3,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:04:56.015: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
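The lifecycle-hook spec being retried here (see the FAILED entry above) creates a pod whose container declares a postStart exec hook. Kubernetes does not report the container as Running until the postStart handler has completed, so such a pod stays Pending until both the container start and the hook succeed; the status lines below show the pod still Pending several minutes after creation. A rough client-go sketch of such a pod; the image, hook command, and namespace are placeholders, not the suite's exact values.

// poststart.go: a pod whose container declares a postStart exec lifecycle hook.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook", Namespace: "default"}, // placeholder namespace
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "pod-with-poststart-exec-hook",
				Image:   "busybox:1.36", // placeholder image with a shell
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// The hook runs inside the container right after it starts; the container
					// is not marked Running until the hook returns.
					PostStart: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sh", "-c", "echo poststart > /tmp/hook"}},
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}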
Jan 1 15:04:56.050: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 1 15:04:58.055: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 1 15:04:58.065: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
(... the same "Pending, waiting for it to be Running (with Ready = true)" status line repeated at ~2s intervals from 15:05:00.070 through 15:08:20.071 ...)
Jan 1 15:08:22.072: INFO: The status of Pod pod-with-poststart-exec-hook is
Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:24.072: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:26.074: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:28.072: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:30.071: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:32.070: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 1 15:08:34.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:36.072: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:38.075: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:40.072: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:42.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:44.071: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:46.072: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:48.072: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:50.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:52.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:54.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:56.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:08:58.079: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:00.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:02.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:04.075: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:06.075: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:08.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:10.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:12.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:14.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:16.075: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:18.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:20.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:22.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:24.076: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:26.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:28.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:30.083: INFO: 
The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:32.076: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:34.076: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:36.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:38.071: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:40.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:42.074: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:44.072: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:46.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:48.075: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:50.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:52.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:54.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:56.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:58.073: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:58.080: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false) Jan 1 15:09:58.081: FAIL: Unexpected error: <*errors.errorString | 0xc0002c82c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0037f7bf0, 0x7f2bde45ea68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc0008c5800) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73 k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0002bed00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:09:58.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-3597" for this suite. 
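For context on the failing spec above: lifecycle_hook.go creates a Pod whose container carries a postStart exec hook and then waits for it to reach Running/Ready, which never happened here. A minimal sketch of that kind of Pod is below; the image, command, and hook contents are illustrative assumptions, not the manifest the conformance suite actually generates.

# Hypothetical repro sketch: a Pod with a postStart exec lifecycle hook, created with the
# same kubeconfig the suite uses. The container is not reported Running until the hook exits.
kubectl --kubeconfig=/tmp/kubeconfig apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: busybox:1.35              # assumption: any image with a shell works for the sketch
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart"]
EOF
kubectl --kubeconfig=/tmp/kubeconfig get pod pod-with-poststart-exec-hook -w   # watch it reach Running/Ready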
• Failure [302.081 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
when create a pod with lifecycle hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

Jan 1 15:09:58.081: Unexpected error:
    <*errors.errorString | 0xc0002c82c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:03:49.783: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 1 15:03:49.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 create -f -'
Jan 1 15:03:50.789: INFO: stderr: ""
Jan 1 15:03:50.789: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 1 15:03:50.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 1 15:03:50.892: INFO: stderr: ""
Jan 1 15:03:50.892: INFO: stdout: "update-demo-nautilus-fwgzm update-demo-nautilus-pmzbf "
Jan 1 15:03:50.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods update-demo-nautilus-fwgzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 1 15:03:50.975: INFO: stderr: "" Jan 1 15:03:50.975: INFO: stdout: "" Jan 1 15:03:50.975: INFO: update-demo-nautilus-fwgzm is created but not running Jan 1 15:03:55.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 1 15:03:56.084: INFO: stderr: "" Jan 1 15:03:56.084: INFO: stdout: "update-demo-nautilus-fwgzm update-demo-nautilus-pmzbf " Jan 1 15:03:56.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods update-demo-nautilus-fwgzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 1 15:03:56.199: INFO: stderr: "" Jan 1 15:03:56.199: INFO: stdout: "true" Jan 1 15:03:56.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods update-demo-nautilus-fwgzm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 1 15:03:56.268: INFO: stderr: "" Jan 1 15:03:56.268: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 1 15:03:56.268: INFO: validating pod update-demo-nautilus-fwgzm Jan 1 15:07:30.657: INFO: update-demo-nautilus-fwgzm is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-fwgzm) Jan 1 15:07:35.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 1 15:07:35.824: INFO: stderr: "" Jan 1 15:07:35.824: INFO: stdout: "update-demo-nautilus-fwgzm update-demo-nautilus-pmzbf " Jan 1 15:07:35.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods update-demo-nautilus-fwgzm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 1 15:07:35.957: INFO: stderr: "" Jan 1 15:07:35.957: INFO: stdout: "true" Jan 1 15:07:35.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods update-demo-nautilus-fwgzm -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 1 15:07:36.091: INFO: stderr: "" Jan 1 15:07:36.091: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 1 15:07:36.091: INFO: validating pod update-demo-nautilus-fwgzm Jan 1 15:11:09.793: INFO: update-demo-nautilus-fwgzm is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-fwgzm) Jan 1 15:11:14.795: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x225 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc00024cb60, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Jan 1 15:11:14.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 delete --grace-period=0 --force -f -' Jan 1 15:11:14.967: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 1 15:11:14.967: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 1 15:11:14.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get rc,svc -l name=update-demo --no-headers' Jan 1 15:11:15.140: INFO: stderr: "No resources found in kubectl-8753 namespace.\n" Jan 1 15:11:15.140: INFO: stdout: "" Jan 1 15:11:15.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8753 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 1 15:11:15.289: INFO: stderr: "" Jan 1 15:11:15.289: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 1 15:11:15.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8753" for this suite. 
• Failure [445.526 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
should create and stop a replication controller [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

Jan 1 15:11:14.795: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:09:05.892: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-9634
STEP: creating service affinity-nodeport-transition in namespace services-9634
STEP: creating replication controller affinity-nodeport-transition in namespace services-9634
I0101 15:09:05.974393 15 runners.go:193] Created replication controller with name: affinity-nodeport-transition, namespace: services-9634, replica count: 3
I0101 15:09:09.024822 15 runners.go:193] affinity-nodeport-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 1 15:09:09.041: INFO: Creating new exec pod
Jan 1 15:09:12.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9634 exec execpod-affinityj6gst -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-transition 80'
Jan 1 15:09:14.342: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-transition 80\nConnection to affinity-nodeport-transition 80 port [tcp/http] succeeded!\n"
Jan 1 15:09:14.342: INFO: stdout: ""
(the same kubectl exec / nc probe was retried roughly every 3s until Jan 1 15:11:16.896; every attempt logged "Connection to affinity-nodeport-transition 80 port [tcp/http] succeeded!" on stderr and an empty stdout)
Jan 1 15:11:16.897: FAIL: Unexpected error:
    <*errors.errorString | 0xc0049a8540>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x71422ce, {0x7b06bd0, 0xc0040c8480}, 0xc00221b680, 0x1)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3262
k8s.io/kubernetes/test/e2e/network.glob..func24.29()
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2147 +0x90
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0006036c0, 0x735e880)
/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
/usr/local/go/src/testing/testing.go:1306 +0x35a
Jan 1 15:11:16.897: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport-transition in namespace services-9634, will wait for the garbage collector to delete the pods
Jan 1 15:11:16.997: INFO: Deleting ReplicationController affinity-nodeport-transition took: 19.609548ms
Jan 1 15:11:17.198: INFO: Terminating ReplicationController affinity-nodeport-transition pods took: 200.652928ms
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:19.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9634" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756

• Failure [133.941 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

Jan 1 15:11:16.897: Unexpected error:
    <*errors.errorString | 0xc0049a8540>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-nodeport-transition:80 over TCP protocol
occurred

/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":20,"skipped":381,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:15.313: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 1 15:11:15.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 create -f -'
Jan 1 15:11:15.845: INFO: stderr: ""
Jan 1 15:11:15.845: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 1 15:11:15.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 1 15:11:16.024: INFO: stderr: ""
Jan 1 15:11:16.025: INFO: stdout: "update-demo-nautilus-cb4db update-demo-nautilus-thglv "
Jan 1 15:11:16.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-cb4db -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 1 15:11:16.179: INFO: stderr: ""
Jan 1 15:11:16.179: INFO: stdout: ""
Jan 1 15:11:16.179: INFO: update-demo-nautilus-cb4db is created but not running
Jan 1 15:11:21.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 1 15:11:21.480: INFO: stderr: ""
Jan 1 15:11:21.481: INFO: stdout: "update-demo-nautilus-cb4db update-demo-nautilus-thglv "
Jan 1 15:11:21.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-cb4db -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 1 15:11:21.676: INFO: stderr: ""
Jan 1 15:11:21.676: INFO: stdout: ""
Jan 1 15:11:21.676: INFO: update-demo-nautilus-cb4db is created but not running
Jan 1 15:11:26.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 1 15:11:26.805: INFO: stderr: ""
Jan 1 15:11:26.805: INFO: stdout: "update-demo-nautilus-cb4db update-demo-nautilus-thglv "
Jan 1 15:11:26.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-cb4db -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 1 15:11:26.914: INFO: stderr: ""
Jan 1 15:11:26.914: INFO: stdout: "true"
Jan 1 15:11:26.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-cb4db -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 1 15:11:27.011: INFO: stderr: ""
Jan 1 15:11:27.011: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 1 15:11:27.011: INFO: validating pod update-demo-nautilus-cb4db
Jan 1 15:11:27.020: INFO: got data: { "image": "nautilus.jpg" }
Jan 1 15:11:27.020: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 1 15:11:27.020: INFO: update-demo-nautilus-cb4db is verified up and running
Jan 1 15:11:27.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-thglv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 1 15:11:27.111: INFO: stderr: ""
Jan 1 15:11:27.112: INFO: stdout: "true"
Jan 1 15:11:27.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods update-demo-nautilus-thglv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 1 15:11:27.213: INFO: stderr: ""
Jan 1 15:11:27.213: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 1 15:11:27.213: INFO: validating pod update-demo-nautilus-thglv
Jan 1 15:11:27.218: INFO: got data: { "image": "nautilus.jpg" }
Jan 1 15:11:27.218: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 1 15:11:27.218: INFO: update-demo-nautilus-thglv is verified up and running
STEP: using delete to clean up resources
Jan 1 15:11:27.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 delete --grace-period=0 --force -f -'
Jan 1 15:11:27.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 1 15:11:27.316: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 1 15:11:27.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get rc,svc -l name=update-demo --no-headers'
Jan 1 15:11:27.427: INFO: stderr: "No resources found in kubectl-7410 namespace.\n"
Jan 1 15:11:27.427: INFO: stdout: ""
Jan 1 15:11:27.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7410 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 1 15:11:27.537: INFO: stderr: ""
Jan 1 15:11:27.537: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:27.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7410" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":21,"skipped":381,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:27.554: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:27.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7737" for this suite.
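The ConfigMap lifecycle steps listed above map onto ordinary kubectl operations. A hedged equivalent is sketched below; the object name and label are illustrative assumptions, and the conformance test drives the same API calls through client-go rather than kubectl.

# Illustrative kubectl walk-through of the same lifecycle (names and labels are assumptions):
kubectl create configmap demo-cm --from-literal=data-0=value-0                   # creating a ConfigMap
kubectl label configmap demo-cm test=lifecycle
kubectl get configmap demo-cm -o yaml                                            # fetching the ConfigMap
kubectl patch configmap demo-cm --type merge -p '{"data":{"data-0":"patched"}}'  # patching the ConfigMap
kubectl get configmaps --all-namespaces -l test=lifecycle                        # listing with a label selector
kubectl delete configmaps -l test=lifecycle                                      # deleting by collection
kubectl get configmaps                                                           # listing in the test namespace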
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":22,"skipped":384,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:27.894: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 1 15:11:27.911: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:28.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5525" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":23,"skipped":392,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:28.482: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-4765bdba-649b-4de4-8fe5-a4f05f61adce
STEP: Creating a pod to test consume secrets
Jan 1 15:11:28.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5" in namespace "projected-506" to be "Succeeded or Failed"
Jan 1 15:11:28.518: INFO: Pod "pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.115569ms
Jan 1 15:11:30.522: INFO: Pod "pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007257435s
Jan 1 15:11:32.526: INFO: Pod "pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011340775s
STEP: Saw pod success
Jan 1 15:11:32.526: INFO: Pod "pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5" satisfied condition "Succeeded or Failed"
Jan 1 15:11:32.529: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 pod pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 1 15:11:32.556: INFO: Waiting for pod pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5 to disappear
Jan 1 15:11:32.558: INFO: Pod pod-projected-secrets-bcb58f62-b61b-4c16-baf1-e7db8787bea5 no longer exists
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:32.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-506" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":408,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:32.578: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 1 15:11:32.611: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092" in namespace "projected-4427" to be "Succeeded or Failed"
Jan 1 15:11:32.615: INFO: Pod "downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485345ms
Jan 1 15:11:34.619: INFO: Pod "downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007695103s
Jan 1 15:11:36.624: INFO: Pod "downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012580542s
STEP: Saw pod success
Jan 1 15:11:36.624: INFO: Pod "downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092" satisfied condition "Succeeded or Failed"
Jan 1 15:11:36.627: INFO: Trying to get logs from node k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb pod downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092 container client-container: <nil>
STEP: delete the pod
Jan 1 15:11:36.647: INFO: Waiting for pod downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092 to disappear
Jan 1 15:11:36.651: INFO: Pod downwardapi-volume-2496ec2e-1dfe-494a-9698-19ea1816c092 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:36.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4427" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":412,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:36.685: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 1 15:11:36.729: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 1 15:11:36.734: INFO: starting watch
STEP: patching
STEP: updating
Jan 1 15:11:36.746: INFO: waiting for watch events with expected annotations
Jan 1 15:11:36.746: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:36.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-6582" for this suite.
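The Ingress API spec above walks through discovery ("getting /apis" down to the networking.k8s.io group and version) and then the full verb set: create, get, list, watch, patch, update, the /status subresource, and delete. A rough manual equivalent with kubectl is sketched below; the namespace and Ingress name are illustrative placeholders, not the generated objects the test used:

# Discovery, mirroring the "getting /apis ..." steps
kubectl get --raw /apis
kubectl get --raw /apis/networking.k8s.io
kubectl get --raw /apis/networking.k8s.io/v1

# Create, get, list, and watch an Ingress (names are placeholders)
kubectl create namespace ingress-demo
kubectl create ingress demo -n ingress-demo --rule="demo.example.com/*=demo-svc:80"
kubectl get ingress demo -n ingress-demo -o yaml
kubectl get ingress --all-namespaces
kubectl get ingress -n ingress-demo --watch --request-timeout=10s

# Patch/update, then delete, as in the closing steps
kubectl annotate ingress demo -n ingress-demo patched=true
kubectl delete ingress demo -n ingress-demo
kubectl delete namespace ingress-demo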
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":26,"skipped":424,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:36.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename certificates
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/certificates.k8s.io
STEP: getting /apis/certificates.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 1 15:11:37.676: INFO: starting watch
STEP: patching
STEP: updating
Jan 1 15:11:37.685: INFO: waiting for watch events with expected annotations
Jan 1 15:11:37.685: INFO: saw patched and updated annotations
STEP: getting /approval
STEP: patching /approval
STEP: updating /approval
STEP: getting /status
STEP: patching /status
STEP: updating /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:37.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-7513" for this suite.
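The CSR spec drives the same verbs against certificates.k8s.io/v1 and additionally round-trips the /approval and /status subresources. A hand-run approximation is sketched below, assuming a throwaway key pair and an illustrative CSR name (the test's own objects are generated, and the base64 flag shown is the GNU coreutils form):

# Discovery for the certificates group/version
kubectl get --raw /apis/certificates.k8s.io/v1

# Create a CSR from a locally generated key (all names are placeholders)
openssl req -new -newkey rsa:2048 -nodes -keyout demo.key -subj "/CN=demo-user" -out demo.csr
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: demo-csr
spec:
  request: $(base64 -w0 demo.csr)
  signerName: kubernetes.io/kube-apiserver-client
  usages: ["client auth"]
EOF

# Approval goes through the /approval subresource; the resulting conditions live under /status
kubectl certificate approve demo-csr
kubectl get csr demo-csr -o jsonpath='{.status.conditions[*].type}'
kubectl delete csr demo-csr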
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":27,"skipped":437,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:11:37.840: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 1 15:11:48.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4114" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":28,"skipped":512,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 1 15:01:02.235: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-7476
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 1 15:01:02.258: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 1 15:01:02.312: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 1 15:01:04.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:06.320: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:08.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:10.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:12.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:14.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:16.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:18.317: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:20.318: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:22.316: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 1 15:01:24.324: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 1 15:01:24.337: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 1 15:01:24.343: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 1 15:01:24.354: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 1 15:01:26.389: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 1 15:01:26.389: INFO: Breadth first check of 192.168.0.74 on host 172.18.0.4...
Jan 1 15:01:26.395: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.0.74&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 15:01:26.396: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 15:01:26.397: INFO: ExecWithOptions: Clientset creation Jan 1 15:01:26.397: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.74%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 1 15:01:31.516: INFO: Waiting for responses: map[netserver-0:{}] Jan 1 15:01:33.517: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-0: Jan 1 15:01:33.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-0 --namespace=pod-network-test-7476' Jan 1 15:01:33.597: INFO: stderr: "" Jan 1 15:01:33.597: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58/172.18.0.4\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.74\nIPs:\n IP: 192.168.0.74\nContainers:\n webserver:\n Container ID: containerd://c2dabcff5af1c3caabe68df961991623091914cfe8ea09b719b203ce5eb778c3\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85th (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-d85th:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-0 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 1 15:01:33.597: INFO: Name: netserver-0 Namespace: 
pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58/172.18.0.4 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.0.74 IPs: IP: 192.168.0.74 Containers: webserver: Container ID: containerd://c2dabcff5af1c3caabe68df961991623091914cfe8ea09b719b203ce5eb778c3 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85th (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-d85th: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-0 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 1 15:01:33.597: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-1: Jan 1 15:01:33.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-1 --namespace=pod-network-test-7476' Jan 1 15:01:33.682: INFO: stderr: "" Jan 1 15:01:33.682: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb/172.18.0.7\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.65\nIPs:\n IP: 192.168.1.65\nContainers:\n webserver:\n Container ID: containerd://00b0765f282e9a8a56e023651404137ec0166bcbfa8fbde9f34ba93b19f7ba5d\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96cq6 (ro)\nConditions:\n Type 
Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-96cq6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-1 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 1 15:01:33.682: INFO: Name: netserver-1 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb/172.18.0.7 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.1.65 IPs: IP: 192.168.1.65 Containers: webserver: Container ID: containerd://00b0765f282e9a8a56e023651404137ec0166bcbfa8fbde9f34ba93b19f7ba5d Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96cq6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-96cq6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-1 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 1 15:01:33.682: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-2: Jan 1 15:01:33.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-2 --namespace=pod-network-test-7476' Jan 1 15:01:33.765: INFO: stderr: "" Jan 1 15:01:33.765: INFO: stdout: "Name: netserver-2\nNamespace: 
pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-worker-9emfga/172.18.0.5\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.79\nIPs:\n IP: 192.168.6.79\nContainers:\n webserver:\n Container ID: containerd://048c7bb3621288511162ead2ce0ff8bdbb95014a3c652ad5d2b527dda91798ba\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frbhm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-frbhm:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-9emfga\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-2 to k8s-upgrade-and-conformance-upqhfa-worker-9emfga\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 1 15:01:33.765: INFO: Name: netserver-2 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-worker-9emfga/172.18.0.5 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.6.79 IPs: IP: 192.168.6.79 Containers: webserver: Container ID: containerd://048c7bb3621288511162ead2ce0ff8bdbb95014a3c652ad5d2b527dda91798ba Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frbhm (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-frbhm: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: 
kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-9emfga Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-2 to k8s-upgrade-and-conformance-upqhfa-worker-9emfga Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 1 15:01:33.765: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-3: Jan 1 15:01:33.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-3 --namespace=pod-network-test-7476' Jan 1 15:01:33.843: INFO: stderr: "" Jan 1 15:01:33.843: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-worker-zwqnic/172.18.0.6\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.3.78\nIPs:\n IP: 192.168.3.78\nContainers:\n webserver:\n Container ID: containerd://391223c2a40495674d2ebaeb4e77beb481dcb1a0532a2742c0e30b4d95cfb884\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68szh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-68szh:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-zwqnic\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-3 to k8s-upgrade-and-conformance-upqhfa-worker-zwqnic\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 1 15:01:33.843: INFO: Name: netserver-3 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-worker-zwqnic/172.18.0.6 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.3.78 IPs: IP: 192.168.3.78 Containers: webserver: Container ID: 
containerd://391223c2a40495674d2ebaeb4e77beb481dcb1a0532a2742c0e30b4d95cfb884 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68szh (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-68szh: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-zwqnic Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7476/netserver-3 to k8s-upgrade-and-conformance-upqhfa-worker-zwqnic Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 1 15:01:33.843: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.0.74&port=8083&tries=1' retrieved map[] expected map[netserver-0:{}]) Jan 1 15:01:33.843: INFO: ...failed...will try again in next pass Jan 1 15:01:33.843: INFO: Breadth first check of 192.168.1.65 on host 172.18.0.7... 
Jan 1 15:01:33.846: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 15:01:33.847: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 15:01:33.847: INFO: ExecWithOptions: Clientset creation Jan 1 15:01:33.847: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 1 15:01:38.927: INFO: Waiting for responses: map[netserver-1:{}] Jan 1 15:01:40.928: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-0: Jan 1 15:01:40.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-0 --namespace=pod-network-test-7476' Jan 1 15:01:41.004: INFO: stderr: "" Jan 1 15:01:41.004: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58/172.18.0.4\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.74\nIPs:\n IP: 192.168.0.74\nContainers:\n webserver:\n Container ID: containerd://c2dabcff5af1c3caabe68df961991623091914cfe8ea09b719b203ce5eb778c3\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85th (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-d85th:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-0 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58\n Normal Pulled 39s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 39s kubelet Created container webserver\n Normal Started 38s kubelet Started container webserver\n" Jan 1 15:01:41.005: INFO: Name: netserver-0 Namespace: 
pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58/172.18.0.4 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.0.74 IPs: IP: 192.168.0.74 Containers: webserver: Container ID: containerd://c2dabcff5af1c3caabe68df961991623091914cfe8ea09b719b203ce5eb778c3 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85th (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-d85th: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-0 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58 Normal Pulled 39s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 39s kubelet Created container webserver Normal Started 38s kubelet Started container webserver Jan 1 15:01:41.005: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-1: Jan 1 15:01:41.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-1 --namespace=pod-network-test-7476' Jan 1 15:01:41.084: INFO: stderr: "" Jan 1 15:01:41.085: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb/172.18.0.7\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.65\nIPs:\n IP: 192.168.1.65\nContainers:\n webserver:\n Container ID: containerd://00b0765f282e9a8a56e023651404137ec0166bcbfa8fbde9f34ba93b19f7ba5d\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96cq6 (ro)\nConditions:\n Type 
Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-96cq6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-1 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb\n Normal Pulled 39s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 39s kubelet Created container webserver\n Normal Started 38s kubelet Started container webserver\n" Jan 1 15:01:41.085: INFO: Name: netserver-1 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb/172.18.0.7 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.1.65 IPs: IP: 192.168.1.65 Containers: webserver: Container ID: containerd://00b0765f282e9a8a56e023651404137ec0166bcbfa8fbde9f34ba93b19f7ba5d Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96cq6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-96cq6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-1 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb Normal Pulled 39s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 39s kubelet Created container webserver Normal Started 38s kubelet Started container webserver Jan 1 15:01:41.085: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-2: Jan 1 15:01:41.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-2 --namespace=pod-network-test-7476' Jan 1 15:01:41.163: INFO: stderr: "" Jan 1 15:01:41.163: INFO: stdout: "Name: netserver-2\nNamespace: 
pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-worker-9emfga/172.18.0.5\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.79\nIPs:\n IP: 192.168.6.79\nContainers:\n webserver:\n Container ID: containerd://048c7bb3621288511162ead2ce0ff8bdbb95014a3c652ad5d2b527dda91798ba\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frbhm (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-frbhm:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-9emfga\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-2 to k8s-upgrade-and-conformance-upqhfa-worker-9emfga\n Normal Pulled 39s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 39s kubelet Created container webserver\n Normal Started 38s kubelet Started container webserver\n" Jan 1 15:01:41.163: INFO: Name: netserver-2 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-worker-9emfga/172.18.0.5 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.6.79 IPs: IP: 192.168.6.79 Containers: webserver: Container ID: containerd://048c7bb3621288511162ead2ce0ff8bdbb95014a3c652ad5d2b527dda91798ba Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frbhm (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-frbhm: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: 
kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-9emfga Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-2 to k8s-upgrade-and-conformance-upqhfa-worker-9emfga Normal Pulled 39s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 39s kubelet Created container webserver Normal Started 38s kubelet Started container webserver Jan 1 15:01:41.163: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-3: Jan 1 15:01:41.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-3 --namespace=pod-network-test-7476' Jan 1 15:01:41.245: INFO: stderr: "" Jan 1 15:01:41.245: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-7476\nPriority: 0\nNode: k8s-upgrade-and-conformance-upqhfa-worker-zwqnic/172.18.0.6\nStart Time: Sun, 01 Jan 2023 15:01:02 +0000\nLabels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.3.78\nIPs:\n IP: 192.168.3.78\nContainers:\n webserver:\n Container ID: containerd://391223c2a40495674d2ebaeb4e77beb481dcb1a0532a2742c0e30b4d95cfb884\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sun, 01 Jan 2023 15:01:03 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68szh (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-68szh:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-zwqnic\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-3 to k8s-upgrade-and-conformance-upqhfa-worker-zwqnic\n Normal Pulled 39s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 39s kubelet Created container webserver\n Normal Started 38s kubelet Started container webserver\n" Jan 1 15:01:41.245: INFO: Name: netserver-3 Namespace: pod-network-test-7476 Priority: 0 Node: k8s-upgrade-and-conformance-upqhfa-worker-zwqnic/172.18.0.6 Start Time: Sun, 01 Jan 2023 15:01:02 +0000 Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true Annotations: <none> Status: Running IP: 192.168.3.78 IPs: IP: 192.168.3.78 Containers: webserver: Container ID: 
containerd://391223c2a40495674d2ebaeb4e77beb481dcb1a0532a2742c0e30b4d95cfb884 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Sun, 01 Jan 2023 15:01:03 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68szh (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-68szh: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-zwqnic Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 39s default-scheduler Successfully assigned pod-network-test-7476/netserver-3 to k8s-upgrade-and-conformance-upqhfa-worker-zwqnic Normal Pulled 39s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 39s kubelet Created container webserver Normal Started 38s kubelet Started container webserver Jan 1 15:01:41.245: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1' retrieved map[] expected map[netserver-1:{}]) Jan 1 15:01:41.245: INFO: ...failed...will try again in next pass Jan 1 15:01:41.245: INFO: Breadth first check of 192.168.6.79 on host 172.18.0.5... Jan 1 15:01:41.248: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.6.79&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 1 15:01:41.248: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 1 15:01:41.249: INFO: ExecWithOptions: Clientset creation Jan 1 15:01:41.249: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.6.79%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 1 15:01:41.334: INFO: Waiting for responses: map[] Jan 1 15:01:41.334: INFO: reached 192.168.6.79 after 0/1 tries Jan 1 15:01:41.334: INFO: Breadth first check of 192.168.3.78 on host 172.18.0.6... 
Jan 1 15:01:41.337: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.3.78&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:01:41.337: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:01:41.338: INFO: ExecWithOptions: Clientset creation
Jan 1 15:01:41.338: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.3.78%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:01:41.409: INFO: Waiting for responses: map[]
Jan 1 15:01:41.409: INFO: reached 192.168.3.78 after 0/1 tries
Jan 1 15:01:41.409: INFO: Going to retry 2 out of 4 pods....
Jan 1 15:01:41.409: INFO: Doublechecking 1 pods in host 172.18.0.4 which weren't seen the first time.
Jan 1 15:01:41.409: INFO: Now attempting to probe pod [[[ 192.168.0.74 ]]]
Jan 1 15:01:41.413: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.0.74&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:01:41.413: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:01:41.413: INFO: ExecWithOptions: Clientset creation
Jan 1 15:01:41.414: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.74%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:01:46.490: INFO: Waiting for responses: map[netserver-0:{}]
[the same ExecWithOptions probe to 192.168.0.74 was re-issued roughly every 7 seconds, from 15:01:48.495 through 15:07:00.774, for 46 tries in total; every attempt logged "Waiting for responses: map[netserver-0:{}]", i.e. no reply from netserver-0]
Jan 1 15:07:05.913: INFO: Waiting for responses: map[netserver-0:{}]
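Triage note: each ExecWithOptions entry above is the e2e framework exec'ing into test-container-pod (reached on 192.168.3.80:9080) and calling the agnhost /dial endpoint which, per the query parameters, asks that pod to make an HTTP request for the hostname of the target at 192.168.0.74:8083 and to report back what it received. A roughly equivalent manual probe, assuming kubectl access to the workload cluster with the same kubeconfig (pod and namespace names taken from the log above), would be:

  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-7476 exec test-container-pod -c webserver -- \
    /bin/sh -c "curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.0.74&port=8083&tries=1'"

An empty result from /dial, as in every attempt above, means the proxy pod received no answer from 192.168.0.74 within the try budget.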
Jan 1 15:07:07.914: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-0:
Jan 1 15:07:07.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-0 --namespace=pod-network-test-7476'
Jan 1 15:07:08.077: INFO: stderr: ""
Jan 1 15:07:08.077: INFO: stdout:
Name: netserver-0
Namespace: pod-network-test-7476
Priority: 0
Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58/172.18.0.4
Start Time: Sun, 01 Jan 2023 15:01:02 +0000
Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true
Annotations: <none>
Status: Running
IP: 192.168.0.74
IPs: IP: 192.168.0.74
Containers:
  webserver:
    Container ID: containerd://c2dabcff5af1c3caabe68df961991623091914cfe8ea09b719b203ce5eb778c3
    Image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports: 8083/TCP, 8081/UDP
    Host Ports: 0/TCP, 0/UDP
    Args: netexec --http-port=8083 --udp-port=8081
    State: Running, Started: Sun, 01 Jan 2023 15:01:03 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment: <none>
    Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-d85th (ro)
Conditions: Initialized True, Ready True, ContainersReady True, PodScheduled True
Volumes: kube-api-access-d85th: Projected (a volume that contains injected data from multiple sources), TokenExpirationSeconds: 3607, ConfigMapName: kube-root-ca.crt, ConfigMapOptional: <nil>, DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s; node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Normal Scheduled 6m6s default-scheduler Successfully assigned pod-network-test-7476/netserver-0 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-2vt58
  Normal Pulled 6m6s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal Created 6m6s kubelet Created container webserver
  Normal Started 6m5s kubelet Started container webserver
Jan 1 15:07:08.078: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-1:
Jan 1 15:07:08.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-1 --namespace=pod-network-test-7476'
Jan 1 15:07:08.242: INFO: stderr: ""
Jan 1 15:07:08.242: INFO: stdout:
Name: netserver-1
Namespace: pod-network-test-7476
Priority: 0
Node: k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb/172.18.0.7
Start Time: Sun, 01 Jan 2023 15:01:02 +0000
Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true
Annotations: <none>
Status: Running
IP: 192.168.1.65
IPs: IP: 192.168.1.65
Containers:
  webserver:
    Container ID: containerd://00b0765f282e9a8a56e023651404137ec0166bcbfa8fbde9f34ba93b19f7ba5d
    Image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports: 8083/TCP, 8081/UDP
    Host Ports: 0/TCP, 0/UDP
    Args: netexec --http-port=8083 --udp-port=8081
    State: Running, Started: Sun, 01 Jan 2023 15:01:03 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment: <none>
    Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-96cq6 (ro)
Conditions: Initialized True, Ready True, ContainersReady True, PodScheduled True
Volumes: kube-api-access-96cq6: Projected (a volume that contains injected data from multiple sources), TokenExpirationSeconds: 3607, ConfigMapName: kube-root-ca.crt, ConfigMapOptional: <nil>, DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s; node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Normal Scheduled 6m6s default-scheduler Successfully assigned pod-network-test-7476/netserver-1 to k8s-upgrade-and-conformance-upqhfa-md-0-6prb7-5768f6855b-64ksb
  Normal Pulled 6m6s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal Created 6m6s kubelet Created container webserver
  Normal Started 6m5s kubelet Started container webserver
Jan 1 15:07:08.242: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-2:
Jan 1 15:07:08.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-2 --namespace=pod-network-test-7476'
Jan 1 15:07:08.398: INFO: stderr: ""
Jan 1 15:07:08.398: INFO: stdout:
Name: netserver-2
Namespace: pod-network-test-7476
Priority: 0
Node: k8s-upgrade-and-conformance-upqhfa-worker-9emfga/172.18.0.5
Start Time: Sun, 01 Jan 2023 15:01:02 +0000
Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true
Annotations: <none>
Status: Running
IP: 192.168.6.79
IPs: IP: 192.168.6.79
Containers:
  webserver:
    Container ID: containerd://048c7bb3621288511162ead2ce0ff8bdbb95014a3c652ad5d2b527dda91798ba
    Image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports: 8083/TCP, 8081/UDP
    Host Ports: 0/TCP, 0/UDP
    Args: netexec --http-port=8083 --udp-port=8081
    State: Running, Started: Sun, 01 Jan 2023 15:01:03 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment: <none>
    Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-frbhm (ro)
Conditions: Initialized True, Ready True, ContainersReady True, PodScheduled True
Volumes: kube-api-access-frbhm: Projected (a volume that contains injected data from multiple sources), TokenExpirationSeconds: 3607, ConfigMapName: kube-root-ca.crt, ConfigMapOptional: <nil>, DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-9emfga
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s; node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Normal Scheduled 6m6s default-scheduler Successfully assigned pod-network-test-7476/netserver-2 to k8s-upgrade-and-conformance-upqhfa-worker-9emfga
  Normal Pulled 6m6s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal Created 6m6s kubelet Created container webserver
  Normal Started 6m5s kubelet Started container webserver
Jan 1 15:07:08.398: INFO: Output of kubectl describe pod pod-network-test-7476/netserver-3:
Jan 1 15:07:08.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7476 describe pod netserver-3 --namespace=pod-network-test-7476'
Jan 1 15:07:08.565: INFO: stderr: ""
Jan 1 15:07:08.566: INFO: stdout:
Name: netserver-3
Namespace: pod-network-test-7476
Priority: 0
Node: k8s-upgrade-and-conformance-upqhfa-worker-zwqnic/172.18.0.6
Start Time: Sun, 01 Jan 2023 15:01:02 +0000
Labels: selector-8a4b7833-88fe-4b50-ad33-98447d728b72=true
Annotations: <none>
Status: Running
IP: 192.168.3.78
IPs: IP: 192.168.3.78
Containers:
  webserver:
    Container ID: containerd://391223c2a40495674d2ebaeb4e77beb481dcb1a0532a2742c0e30b4d95cfb884
    Image: k8s.gcr.io/e2e-test-images/agnhost:2.39
    Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e
    Ports: 8083/TCP, 8081/UDP
    Host Ports: 0/TCP, 0/UDP
    Args: netexec --http-port=8083 --udp-port=8081
    State: Running, Started: Sun, 01 Jan 2023 15:01:03 +0000
    Ready: True
    Restart Count: 0
    Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3
    Environment: <none>
    Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-68szh (ro)
Conditions: Initialized True, Ready True, ContainersReady True, PodScheduled True
Volumes: kube-api-access-68szh: Projected (a volume that contains injected data from multiple sources), TokenExpirationSeconds: 3607, ConfigMapName: kube-root-ca.crt, ConfigMapOptional: <nil>, DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-upqhfa-worker-zwqnic
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s; node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Normal Scheduled 6m6s default-scheduler Successfully assigned pod-network-test-7476/netserver-3 to k8s-upgrade-and-conformance-upqhfa-worker-zwqnic
  Normal Pulled 6m6s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine
  Normal Created 6m6s kubelet Created container webserver
  Normal Started 6m5s kubelet Started container webserver
Jan 1 15:07:08.566: INFO: encountered error during dial (did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.0.74&port=8083&tries=1' retrieved map[] expected map[netserver-0:{}])
Jan 1 15:07:08.566: INFO: ... Done probing pod [[[ 192.168.0.74 ]]]
Jan 1 15:07:08.566: INFO: succeeded at polling 3 out of 4 connections
Jan 1 15:07:08.566: INFO: Doublechecking 1 pods in host 172.18.0.7 which weren't seen the first time.
Jan 1 15:07:08.566: INFO: Now attempting to probe pod [[[ 192.168.1.65 ]]]
Jan 1 15:07:08.573: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:08.574: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:08.575: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:08.575: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:13.709: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:07:22.836: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:22.836: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:22.838: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:22.838: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:27.967: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:07:29.974: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:29.974: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:29.975: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:29.976: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:35.117: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:07:37.124: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:37.124: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:37.125: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:37.125: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:42.271: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:07:44.279: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:44.279: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:44.280: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:44.280: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:49.463: INFO: Waiting for responses: map[netserver-1:{}]
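The retries above (and the identical ones that continue below) all wait on the same unanswered target, 192.168.1.65:8083, which the earlier "Doublechecking 1 pods in host 172.18.0.7" line ties to a pod on that host. If the cluster is still reachable, a few standard kubectl queries, sketched here as a starting point rather than a verified procedure, can map those addresses back to a concrete netserver pod and node and help narrow whether the problem sits on that node's network path:

  # Hypothetical mapping of the unreachable pod IP and host IP back to objects
  export KUBECONFIG=/tmp/kubeconfig
  kubectl get nodes -o wide                            # which node has INTERNAL-IP 172.18.0.7
  kubectl -n pod-network-test-7476 get pods -o wide    # which netserver-* owns IP 192.168.1.65
  kubectl -n kube-system get pods -o wide              # kube-proxy / CNI pods; note the ones on that node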
Jan 1 15:07:51.470: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:51.470: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:51.471: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:51.471: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:07:56.620: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:07:58.629: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:07:58.629: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:07:58.630: INFO: ExecWithOptions: Clientset creation
Jan 1 15:07:58.631: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:08:03.803: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:08:05.809: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:08:05.809: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:08:05.811: INFO: ExecWithOptions: Clientset creation
Jan 1 15:08:05.811: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7476/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.3.80%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.65%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 1 15:08:10.956: INFO: Waiting for responses: map[netserver-1:{}]
Jan 1 15:08:12.960: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.3.80:9080/dial?request=hostname&protocol=http&host=192.168.1.65&port=8083&tries=1'] Namespace:pod-network-test-7476 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 1 15:08:12.960: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 1 15:08:12.962: INFO: ExecWithOptions: Clientset creation
Jan 1 15:08:12.962: INFO: ExecWithOptions: execute(POST https://