Recent runs | View in Spyglass
Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h0m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
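The `--ginkgo.focus` argument above is a fully escaped regular expression that pins the run to a single spec. As a quick sanity check, the pattern can be exercised against the spec name it encodes; the unescaped string below is reconstructed from the pattern itself, not copied from Ginkgo's output:

```python
import re

# The focus pattern from the job invocation, verbatim.
focus = (r'capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting'
         r'\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould'
         r'\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$')

# Spec name reconstructed by unescaping the pattern (assumes single spaces).
spec = ("capi-e2e When upgrading a workload cluster and testing K8S conformance "
        "[Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster "
        "and run kubetest")

# The pattern matches exactly this one spec name.
assert re.search(focus, spec) is not None
```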
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000620828>: {
        error: <*errors.withMessage | 0xc0007aa660>{
            cause: <*errors.errorString | 0xc0007b29c0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1ad2fea, 0x1b134a8, 0x73c2fa, 0x73bcc5, 0x73b3bb, 0x741149, 0x740b27, 0x761fe5, 0x761d05, 0x761545, 0x7637f2, 0x76f9a5, 0x76f7be, 0x1b2de51, 0x5156c2, 0x46b2c1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
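The root cause recorded above is exit code 137 from the conformance test container. By shell convention an exit code above 128 means the process died from signal (code - 128); 137 is 128 + 9, i.e. SIGKILL, which in a containerized run like this one typically means the container was killed by the runtime (often the OOM killer) rather than failing a test. A minimal sketch of the decoding:

```python
import signal

def decode_exit_code(code: int) -> str:
    """Map a shell-style exit code to a human-readable cause.

    Codes above 128 mean the process was terminated by signal (code - 128).
    """
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return f"exited normally with status {code}"

print(decode_exit_code(137))  # killed by SIGKILL
```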
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-9831xy
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-9831xy"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-3a12zq" using the "upgrades" template (Kubernetes v1.22.8, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-3a12zq --infrastructure (default) --kubernetes-version v1.22.8 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
configmap/cni-k8s-upgrade-and-conformance-3a12zq-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-mp-0-config created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-dmp-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-3a12zq-md-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Kubernetes control-plane
INFO: Patching the new kubernetes version to KCP
INFO: Waiting for control-plane machines to have the upgraded kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.5
INFO: Waiting for kube-proxy to have the upgraded kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: Upgrading the machine deployment
INFO: Patching the new kubernetes version to Machine Deployment k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-md-0
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-md-0 to be upgraded from v1.22.8 to v1.23.5
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.5
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-9831xy/k8s-upgrade-and-conformance-3a12zq-mp-0 to be upgraded from v1.22.8 to v1.23.5
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.5
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650116193 - Will randomize all specs
Will run 7042 specs
Running in parallel across 4 nodes
Apr 16 13:36:37.296: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:36:37.299: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 16 13:36:37.312: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 16 13:36:37.341: INFO: 16 / 16 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 16 13:36:37.341: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
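Each spec that completes in the parallel run below emits a one-line JSON summary record (the `{"msg":"PASSED ...","total":-1,...}` lines interleaved in the output). A minimal sketch, assuming the log is read line by line, of tallying those records:

```python
import json

def tally(lines):
    """Count passed and failed specs from ginkgo per-spec JSON summary lines."""
    passed = failed = 0
    for line in lines:
        line = line.strip()
        if line.startswith('{"msg":'):
            rec = json.loads(line)
            if rec["msg"].startswith("PASSED"):
                passed += 1
            if rec["failed"]:
                failed += 1
    return passed, failed

# Hypothetical single-record sample in the same shape as the log lines.
sample = ['{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS '
          'created by deployment when not orphaning [Conformance]",'
          '"total":-1,"completed":1,"skipped":11,"failed":0}']
print(tally(sample))  # (1, 0)
```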
Apr 16 13:36:37.341: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 16 13:36:37.346: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 16 13:36:37.346: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 16 13:36:37.346: INFO: e2e test version: v1.23.5
Apr 16 13:36:37.347: INFO: kube-apiserver version: v1.23.5
Apr 16 13:36:37.348: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:36:37.352: INFO: Cluster IP family: ipv4
------------------------------
Apr 16 13:36:37.351: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:36:37.363: INFO: Cluster IP family: ipv4
------------------------------
Apr 16 13:36:37.372: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:36:37.390: INFO: Cluster IP family: ipv4
------------------------------
Apr 16 13:36:37.397: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:36:37.413: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:37.398: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
W0416 13:36:37.443312 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 16 13:36:37.443: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 1 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
Apr 16 13:36:38.045: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb is Running (Ready = true)
Apr 16 13:36:38.166: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:38.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1165" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:37.376: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
W0416 13:36:37.431157 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 16 13:36:37.431: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-775df8b4-aa73-4c7a-8d73-4eee4ea588c2
STEP: Creating a pod to test consume configMaps
Apr 16 13:36:37.448: INFO: Waiting up to 5m0s for pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d" in namespace "configmap-7503" to be "Succeeded or Failed"
Apr 16 13:36:37.452: INFO: Pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.41882ms
Apr 16 13:36:39.457: INFO: Pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009703342s
Apr 16 13:36:41.463: INFO: Pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015136192s
Apr 16 13:36:43.467: INFO: Pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019814419s
STEP: Saw pod success
Apr 16 13:36:43.468: INFO: Pod "pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d" satisfied condition "Succeeded or Failed"
Apr 16 13:36:43.470: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d container agnhost-container: <nil>
STEP: delete the pod
Apr 16 13:36:43.494: INFO: Waiting for pod pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d to disappear
Apr 16 13:36:43.497: INFO: Pod pod-configmaps-f05df7fa-ba2c-44d3-8c91-68fbdeda6b4d no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:43.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7503" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:37.426: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
W0416 13:36:37.460021 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 16 13:36:37.460: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:36:37.479: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6" in namespace "downward-api-5679" to be "Succeeded or Failed"
Apr 16 13:36:37.484: INFO: Pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.908734ms
Apr 16 13:36:39.489: INFO: Pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009771056s
Apr 16 13:36:41.495: INFO: Pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016146128s
Apr 16 13:36:43.499: INFO: Pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020556119s
STEP: Saw pod success
Apr 16 13:36:43.499: INFO: Pod "downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6" satisfied condition "Succeeded or Failed"
Apr 16 13:36:43.503: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6 container client-container: <nil>
STEP: delete the pod
Apr 16 13:36:43.530: INFO: Waiting for pod downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6 to disappear
Apr 16 13:36:43.532: INFO: Pod downwardapi-volume-f2d846e2-ab65-4579-8c64-a72b69a494e6 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:43.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5679" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":5,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:37.524: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
W0416 13:36:37.551504 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 16 13:36:37.551: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:36:37.976: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 13:36:39.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 13:36:41.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 37, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:36:45.016: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:45.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9813" for this suite.
STEP: Destroying namespace "webhook-9813-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":-1,"completed":1,"skipped":55,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:43.509: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name cm-test-opt-del-156a951e-84c2-48b9-9d91-3bb3ada96269
STEP: Creating configMap with name cm-test-opt-upd-f731a4b6-445d-4996-b302-2c8f3d3b4ae0
STEP: Creating the pod
Apr 16 13:36:43.573: INFO: The status of Pod pod-projected-configmaps-b6443133-fbed-42d8-89c3-912f1960e09f is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:36:45.579: INFO: The status of Pod pod-projected-configmaps-b6443133-fbed-42d8-89c3-912f1960e09f is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-156a951e-84c2-48b9-9d91-3bb3ada96269
STEP: Updating configmap cm-test-opt-upd-f731a4b6-445d-4996-b302-2c8f3d3b4ae0
STEP: Creating configMap with name cm-test-opt-create-7cecdf26-9e5b-4f91-87bf-5c1b49109217
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:47.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4234" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":12,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:45.128: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 16 13:36:48.202: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:48.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1674" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":58,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:43.548: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:36:43.580: INFO: Creating simple deployment test-new-deployment
Apr 16 13:36:43.600: INFO: deployment "test-new-deployment" doesn't have the required revision set
Apr 16 13:36:45.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 13:36:47.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 36, 43, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 16 13:36:49.646: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment
deployment-6895 1cf730b4-33df-4e07-8e68-7607653e5e4a 2718 3 2022-04-16 13:36:43 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2022-04-16 13:36:43 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:36:47 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002d44a98 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-16 13:36:47 +0000 UTC,LastTransitionTime:2022-04-16 13:36:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2022-04-16 13:36:47 +0000 UTC,LastTransitionTime:2022-04-16 13:36:43 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Apr 16 13:36:49.653: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-6895 2a4623ac-ffdd-4463-adf0-60da704d4ab0 2723 2 2022-04-16 13:36:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment
1cf730b4-33df-4e07-8e68-7607653e5e4a 0xc002c58667 0xc002c58668}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:36:43 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1cf730b4-33df-4e07-8e68-7607653e5e4a\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:36:47 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002c586f8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:36:49.659: INFO: Pod "test-new-deployment-5d9fdcc779-kj66p" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-kj66p test-new-deployment-5d9fdcc779- deployment-6895 46282248-d30b-4bf7-a249-4870c0eb7f55 2665 0 2022-04-16 13:36:43 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 2a4623ac-ffdd-4463-adf0-60da704d4ab0 0xc002d44ed0 0xc002d44ed1}] [] [{kube-controller-manager Update v1 2022-04-16 13:36:43 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4623ac-ffdd-4463-adf0-60da704d4ab0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:36:47 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.3\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b9dpr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLis
t{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b9dpr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContain
er{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:36:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:36:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:36:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.3,StartTime:2022-04-16 13:36:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:36:47 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c27fbf2660fef47db8e2046131530a63383e21736b2e2bf5611013b70b8a0084,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:36:49.659: INFO: Pod "test-new-deployment-5d9fdcc779-z22h7" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-z22h7 test-new-deployment-5d9fdcc779- deployment-6895 41ef52ee-d135-424a-af56-e68ebd7a904d 2722 0 2022-04-16 13:36:49 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 2a4623ac-ffdd-4463-adf0-60da704d4ab0 0xc002d450b0 0xc002d450b1}] [] 
[{kube-controller-manager Update v1 2022-04-16 13:36:49 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2a4623ac-ffdd-4463-adf0-60da704d4ab0\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4f5dm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,
Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4f5dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},Topol
ogySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:36:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:49.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6895" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":2,"skipped":8,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:48.247: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 16 13:36:50.296: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:50.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2065" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":73,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:49.744: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 16 13:36:49.783: INFO: Waiting up to 5m0s for pod "pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda" in namespace "emptydir-6177" to be "Succeeded or Failed"
Apr 16 13:36:49.789: INFO: Pod "pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda": Phase="Pending", Reason="", readiness=false. Elapsed: 5.836082ms
Apr 16 13:36:51.794: INFO: Pod "pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011160232s
STEP: Saw pod success
Apr 16 13:36:51.794: INFO: Pod "pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda" satisfied condition "Succeeded or Failed"
Apr 16 13:36:51.797: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda container test-container: <nil>
STEP: delete the pod
Apr 16 13:36:51.812: INFO: Waiting for pod pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda to disappear
Apr 16 13:36:51.815: INFO: Pod pod-7a9caae1-50f6-4d6b-aa23-f1bfd1f21dda no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:51.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6177" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":55,"failed":0}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:51.850: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Apr 16 13:36:53.900: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:36:55.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7821" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":4,"skipped":70,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:38.193: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:36:38.239: INFO: created pod
Apr 16 13:36:38.240: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-8551" to be "Succeeded or Failed"
Apr 16 13:36:38.243: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.306036ms
Apr 16 13:36:40.247: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007819712s
Apr 16 13:36:42.253: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013253941s
Apr 16 13:36:44.257: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017337226s
STEP: Saw pod success
Apr 16 13:36:44.257: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Apr 16 13:37:14.257: INFO: polling logs
Apr 16 13:37:14.282: INFO: Pod logs:
2022/04/16 13:36:42 OK: Got token
2022/04/16 13:36:42 validating with in-cluster discovery
2022/04/16 13:36:42 OK: got issuer https://kubernetes.default.svc.cluster.local
2022/04/16 13:36:42 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-8551:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650116798, NotBefore:1650116198, IssuedAt:1650116198, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8551", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"fd516ad2-e4c6-48fe-bb4a-eeb34960afa6"}}}
2022/04/16 13:36:42 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
2022/04/16 13:36:42 OK: Validated signature on JWT
2022/04/16 13:36:42 OK: Got valid claims from token!
2022/04/16 13:36:42 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-8551:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650116798, NotBefore:1650116198, IssuedAt:1650116198, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-8551", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"fd516ad2-e4c6-48fe-bb4a-eeb34960afa6"}}}
Apr 16 13:37:14.282: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:14.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8551" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":2,"skipped":20,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:14.331: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-09ad1517-3f22-4d78-80aa-af21d3fcad05
STEP: Creating a pod to test consume configMaps
Apr 16 13:37:14.373: INFO: Waiting up to 5m0s for pod "pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8" in namespace "configmap-830" to be "Succeeded or Failed"
Apr 16 13:37:14.376: INFO: Pod "pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.555186ms
Apr 16 13:37:16.380: INFO: Pod "pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007875434s
STEP: Saw pod success
Apr 16 13:37:16.381: INFO: Pod "pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8" satisfied condition "Succeeded or Failed"
Apr 16 13:37:16.383: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8 container agnhost-container: <nil>
STEP: delete the pod
Apr 16 13:37:16.398: INFO: Waiting for pod pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8 to disappear
Apr 16 13:37:16.400: INFO: Pod pod-configmaps-b49ba4c7-6a78-43ad-ab7a-e39d1f17d4c8 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:16.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-830" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":45,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:16.414: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should complete a service status lifecycle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a Service
STEP: watching for the Service to be added
Apr 16 13:37:16.464: INFO: Found Service test-service-ntqtj in namespace services-7033 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}]
Apr 16 13:37:16.464: INFO: Service test-service-ntqtj created
STEP: Getting /status
Apr 16 13:37:16.469: INFO: Service test-service-ntqtj has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Apr 16 13:37:16.479: INFO: observed Service test-service-ntqtj in namespace services-7033 with annotations: map[] & LoadBalancer: {[]}
Apr 16 13:37:16.479: INFO: Found Service test-service-ntqtj in namespace services-7033 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Apr 16 13:37:16.479: INFO: Service test-service-ntqtj has service status patched
STEP: updating the ServiceStatus
Apr 16 13:37:16.490: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Apr 16 13:37:16.493: INFO: Observed Service test-service-ntqtj in namespace services-7033 with annotations: map[] & Conditions: {[]}
Apr 16 13:37:16.493: INFO: Observed event: &Service{ObjectMeta:{test-service-ntqtj services-7033 c5601fd6-7929-4755-a4d6-417e6c908dda 3018 0 2022-04-16 13:37:16 +0000 UTC <nil> <nil> map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-04-16 13:37:16 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2022-04-16 13:37:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.138.110.53,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.138.110.53],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Apr 16 13:37:16.493: INFO: Found Service test-service-ntqtj in namespace services-7033 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Apr 16 13:37:16.493: INFO: Service test-service-ntqtj has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Apr 16 13:37:16.509: INFO: observed Service test-service-ntqtj in namespace services-7033 with labels: map[test-service-static:true]
Apr 16 13:37:16.510: INFO: observed Service test-service-ntqtj in namespace services-7033 with labels: map[test-service-static:true]
Apr 16 13:37:16.510: INFO: observed Service test-service-ntqtj in namespace services-7033 with labels: map[test-service-static:true]
Apr 16 13:37:16.510: INFO: Found Service test-service-ntqtj in namespace services-7033 with labels: map[test-service:patched test-service-static:true]
Apr 16 13:37:16.510: INFO: Service test-service-ntqtj patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Apr 16 13:37:16.528: INFO: Observed event: ADDED
Apr 16 13:37:16.528: INFO: Observed event: MODIFIED
Apr 16 13:37:16.528: INFO: Observed event: MODIFIED
Apr 16 13:37:16.528: INFO: Observed event: MODIFIED
Apr 16 13:37:16.528: INFO: Found Service test-service-ntqtj in namespace services-7033 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Apr 16 13:37:16.529: INFO: Service test-service-ntqtj deleted
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:16.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7033" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":4,"skipped":47,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:55.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-downwardapi-ppj6
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 13:36:55.999: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-ppj6" in namespace "subpath-866" to be "Succeeded or Failed"
Apr 16 13:36:56.005: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.050997ms
Apr 16 13:36:58.009: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 2.009204091s
Apr 16 13:37:00.014: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 4.014102442s
Apr 16 13:37:02.017: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 6.017810224s
Apr 16 13:37:04.021: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 8.021595375s
Apr 16 13:37:06.026: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 10.026183336s
Apr 16 13:37:08.030: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 12.03007334s
Apr 16 13:37:10.034: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 14.034240578s
Apr 16 13:37:12.038: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 16.038439326s
Apr 16 13:37:14.042: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 18.042998495s
Apr 16 13:37:16.046: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Running", Reason="", readiness=true. Elapsed: 20.04666924s
Apr 16 13:37:18.053: INFO: Pod "pod-subpath-test-downwardapi-ppj6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.053465663s
STEP: Saw pod success
Apr 16 13:37:18.053: INFO: Pod "pod-subpath-test-downwardapi-ppj6" satisfied condition "Succeeded or Failed"
Apr 16 13:37:18.056: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-subpath-test-downwardapi-ppj6 container test-container-subpath-downwardapi-ppj6: <nil>
STEP: delete the pod
Apr 16 13:37:18.071: INFO: Waiting for pod pod-subpath-test-downwardapi-ppj6 to disappear
Apr 16 13:37:18.074: INFO: Pod pod-subpath-test-downwardapi-ppj6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-ppj6
Apr 16 13:37:18.074: INFO: Deleting pod "pod-subpath-test-downwardapi-ppj6" in namespace "subpath-866"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:18.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-866" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":5,"skipped":73,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:18.116: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:37:18.726: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:37:21.753: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:21.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-57" for this suite.
STEP: Destroying namespace "webhook-57-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":6,"skipped":90,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:22.010: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if Kubernetes control plane services is included in cluster-info [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: validating cluster-info
Apr 16 13:37:22.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6229 cluster-info'
Apr 16 13:37:22.344: INFO: stderr: ""
Apr 16 13:37:22.344: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:22.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6229" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":7,"skipped":96,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:22.439: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-e30506f5-0e17-4fb1-9c2d-73e7df093fc4
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:26.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8543" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:16.578: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8404.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-8404.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-8404.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-8404.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 16 13:37:26.677: INFO: DNS probes using dns-8404/dns-test-e224dddb-aae2-4311-a9e3-445685e6c94d succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:26.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8404" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":5,"skipped":64,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:26.725: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 16 13:37:26.774: INFO: Waiting up to 5m0s for pod "security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582" in namespace "security-context-4229" to be "Succeeded or Failed"
Apr 16 13:37:26.778: INFO: Pod "security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582": Phase="Pending", Reason="", readiness=false. Elapsed: 3.35857ms
Apr 16 13:37:28.781: INFO: Pod "security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007255338s
STEP: Saw pod success
Apr 16 13:37:28.782: INFO: Pod "security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582" satisfied condition "Succeeded or Failed"
Apr 16 13:37:28.784: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582 container test-container: <nil>
STEP: delete the pod
Apr 16 13:37:28.799: INFO: Waiting for pod security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582 to disappear
Apr 16 13:37:28.802: INFO: Pod security-context-19c96c53-1a2a-4ef5-bba8-4ed5f4be1582 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:28.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4229" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":68,"failed":0}
SSSSSS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:28.825: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 16 13:37:30.882: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:30.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1963" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":74,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:31.058: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:37:31.093: INFO: Creating pod...
Apr 16 13:37:31.107: INFO: Pod Quantity: 1 Status: Pending
Apr 16 13:37:32.112: INFO: Pod Quantity: 1 Status: Pending
Apr 16 13:37:33.111: INFO: Pod Status: Running
Apr 16 13:37:33.111: INFO: Creating service...
Apr 16 13:37:33.123: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/DELETE
Apr 16 13:37:33.130: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Apr 16 13:37:33.130: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/GET
Apr 16 13:37:33.137: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Apr 16 13:37:33.137: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/HEAD
Apr 16 13:37:33.143: INFO: http.Client request:HEAD | StatusCode:200
Apr 16 13:37:33.143: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/OPTIONS
Apr 16 13:37:33.150: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Apr 16 13:37:33.150: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/PATCH
Apr 16 13:37:33.154: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Apr 16 13:37:33.154: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/POST
Apr 16 13:37:33.157: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Apr 16 13:37:33.157: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/pods/agnhost/proxy/some/path/with/PUT
Apr 16 13:37:33.162: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Apr 16 13:37:33.163: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/DELETE
Apr 16 13:37:33.172: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Apr 16 13:37:33.172: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/GET
Apr 16 13:37:33.180: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Apr 16 13:37:33.180: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/HEAD
Apr 16 13:37:33.186: INFO: http.Client request:HEAD | StatusCode:200
Apr 16 13:37:33.186: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/OPTIONS
Apr 16 13:37:33.193: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Apr 16 13:37:33.193: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/PATCH
Apr 16 13:37:33.205: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Apr 16 13:37:33.205: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/POST
Apr 16 13:37:33.212: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Apr 16 13:37:33.212: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-4807/services/test-service/proxy/some/path/with/PUT
Apr 16 13:37:33.225: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:33.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4807" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":8,"skipped":149,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:47.702: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-e6bf4c3d-5d78-4de3-90aa-3621ac72ddfe in namespace container-probe-625
Apr 16 13:36:51.746: INFO: Started pod busybox-e6bf4c3d-5d78-4de3-90aa-3621ac72ddfe in namespace container-probe-625
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 13:36:51.749: INFO: Initial restart count of pod busybox-e6bf4c3d-5d78-4de3-90aa-3621ac72ddfe is 0
Apr 16 13:37:40.047: INFO: Restart count of pod container-probe-625/busybox-e6bf4c3d-5d78-4de3-90aa-3621ac72ddfe is now 1 (48.298092292s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:40.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-625" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":50,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:33.304: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-6306
[It] should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating statefulset ss in namespace statefulset-6306
Apr 16 13:37:33.360: INFO: Found 0 stateful pods, waiting for 1
Apr 16 13:37:43.372: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
STEP: Patch a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 16 13:37:43.398: INFO: Deleting all statefulset in ns statefulset-6306
Apr 16 13:37:43.402: INFO: Scaling statefulset ss to 0
Apr 16 13:37:53.422: INFO: Waiting for statefulset status.replicas updated to 0
Apr 16 13:37:53.425: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:53.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6306" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":-1,"completed":9,"skipped":183,"failed":0}
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:53.448: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name s-test-opt-del-8804832a-55e1-4310-8c0f-35f60a6ec151
STEP: Creating secret with name s-test-opt-upd-dd99f5a5-0664-40f2-ada6-cd186a116b33
STEP: Creating the pod
Apr 16 13:37:53.496: INFO: The status of Pod pod-secrets-35465031-2aca-44bc-9043-4e20cb357c99 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:37:55.500: INFO: The status of Pod pod-secrets-35465031-2aca-44bc-9043-4e20cb357c99 is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-8804832a-55e1-4310-8c0f-35f60a6ec151
STEP: Updating secret s-test-opt-upd-dd99f5a5-0664-40f2-ada6-cd186a116b33
STEP: Creating secret with name s-test-opt-create-57216bbb-81fd-464d-a8ca-ad5d7a08a030
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:37:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4193" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":183,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:59.623: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:37:59.666: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3" in namespace "projected-2381" to be "Succeeded or Failed"
Apr 16 13:37:59.669: INFO: Pod "downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872938ms
Apr 16 13:38:01.677: INFO: Pod "downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011777555s
STEP: Saw pod success
Apr 16 13:38:01.677: INFO: Pod "downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3" satisfied condition "Succeeded or Failed"
Apr 16 13:38:01.682: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3 container client-container: <nil>
STEP: delete the pod
Apr 16 13:38:01.699: INFO: Waiting for pod downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3 to disappear
Apr 16 13:38:01.702: INFO: Pod downwardapi-volume-bef54b96-fa86-4420-817f-586500abb3a3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:01.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2381" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":214,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:37:40.071: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-3457
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 16 13:37:40.103: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 16 13:37:40.141: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:37:42.145: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:37:44.145: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:46.145: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:48.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:50.152: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:52.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:54.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:56.146: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:37:58.144: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:38:00.145: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 16 13:38:00.151: INFO: The status of Pod netserver-1 is Running (Ready = true)
Apr 16 13:38:00.160: INFO: The status of Pod netserver-2 is Running (Ready = true)
Apr 16 13:38:00.168: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Apr 16 13:38:02.188: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Apr 16 13:38:02.188: INFO: Breadth first check of 192.168.0.5 on host 172.18.0.4...
Apr 16 13:38:02.191: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.12:9080/dial?request=hostname&protocol=udp&host=192.168.0.5&port=8081&tries=1'] Namespace:pod-network-test-3457 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:38:02.191: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:38:02.191: INFO: ExecWithOptions: Clientset creation
Apr 16 13:38:02.192: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3457/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.12%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.0.5%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 16 13:38:02.278: INFO: Waiting for responses: map[]
Apr 16 13:38:02.278: INFO: reached 192.168.0.5 after 0/1 tries
Apr 16 13:38:02.278: INFO: Breadth first check of 192.168.2.9 on host 172.18.0.7...
Apr 16 13:38:02.282: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.12:9080/dial?request=hostname&protocol=udp&host=192.168.2.9&port=8081&tries=1'] Namespace:pod-network-test-3457 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 13:38:02.282: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 16 13:38:02.283: INFO: ExecWithOptions: Clientset creation Apr 16 13:38:02.283: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3457/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.12%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.2.9%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 16 13:38:02.372: INFO: Waiting for responses: map[] Apr 16 13:38:02.372: INFO: reached 192.168.2.9 after 0/1 tries Apr 16 13:38:02.372: INFO: Breadth first check of 192.168.3.7 on host 172.18.0.6... 
Apr 16 13:38:02.375: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.12:9080/dial?request=hostname&protocol=udp&host=192.168.3.7&port=8081&tries=1'] Namespace:pod-network-test-3457 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 13:38:02.375: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 16 13:38:02.376: INFO: ExecWithOptions: Clientset creation Apr 16 13:38:02.376: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3457/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.12%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.3.7%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 16 13:38:02.463: INFO: Waiting for responses: map[] Apr 16 13:38:02.463: INFO: reached 192.168.3.7 after 0/1 tries Apr 16 13:38:02.463: INFO: Breadth first check of 192.168.6.12 on host 172.18.0.5... 
Apr 16 13:38:02.466: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.12:9080/dial?request=hostname&protocol=udp&host=192.168.6.12&port=8081&tries=1'] Namespace:pod-network-test-3457 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 13:38:02.466: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 16 13:38:02.467: INFO: ExecWithOptions: Clientset creation Apr 16 13:38:02.467: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3457/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.12%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.6.12%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Apr 16 13:38:02.547: INFO: Waiting for responses: map[] Apr 16 13:38:02.547: INFO: reached 192.168.6.12 after 0/1 tries Apr 16 13:38:02.547: INFO: Going to retry 0 out of 4 pods.... [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:38:02.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-3457" for this suite. 
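The breadth-first checks above probe each netserver pod through the agnhost `/dial` endpoint served by test-container-pod, wrapped in a `kubectl exec ... curl` call. A minimal sketch of how that probe URL is assembled (the IPs are the ones logged in this run and are purely illustrative):

```sh
# Assemble the agnhost /dial probe URL that the networking test curls from
# inside test-container-pod. IPs are taken from this run's log (illustrative).
test_pod_ip="192.168.2.12"   # test-container-pod, serves /dial on port 9080
target_ip="192.168.0.5"      # netserver-0, listens for the UDP echo on 8081
dial_url="http://${test_pod_ip}:9080/dial?request=hostname&protocol=udp&host=${target_ip}&port=8081&tries=1"
echo "$dial_url"
```

A successful probe returns the target pod's hostname in the JSON response, which is why the log reports "reached 192.168.0.5 after 0/1 tries" for each endpoint.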
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":53,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:01.714: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-4641416a-7763-46a2-8ee2-e81364c45181
STEP: Creating a pod to test consume secrets
Apr 16 13:38:01.751: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6" in namespace "projected-7997" to be "Succeeded or Failed"
Apr 16 13:38:01.755: INFO: Pod "pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.057219ms
Apr 16 13:38:03.759: INFO: Pod "pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007317109s
STEP: Saw pod success
Apr 16 13:38:03.760: INFO: Pod "pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6" satisfied condition "Succeeded or Failed"
Apr 16 13:38:03.762: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 16 13:38:03.785: INFO: Waiting for pod pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6 to disappear
Apr 16 13:38:03.788: INFO: Pod pod-projected-secrets-bb5fb2ba-c5ee-4963-9be6-5490a4eadbf6 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:03.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7997" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":215,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:02.604: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6efe076d-ffc2-4e35-87d8-e2422de56681
STEP: Creating the pod
Apr 16 13:38:02.650: INFO: The status of Pod pod-projected-configmaps-a2ed4c15-71ad-4093-af30-d2d23d1a6c98 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:38:04.653: INFO: The status of Pod pod-projected-configmaps-a2ed4c15-71ad-4093-af30-d2d23d1a6c98 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-6efe076d-ffc2-4e35-87d8-e2422de56681
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:06.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1751" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":87,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:03.810: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:09.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5888" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":13,"skipped":223,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:06.763: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Apr 16 13:38:06.804: INFO: The status of Pod annotationupdate3ab6d9a3-eb74-4a3b-9579-26edf5616fe8 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:38:08.810: INFO: The status of Pod annotationupdate3ab6d9a3-eb74-4a3b-9579-26edf5616fe8 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:38:10.808: INFO: The status of Pod annotationupdate3ab6d9a3-eb74-4a3b-9579-26edf5616fe8 is Running (Ready = true)
Apr 16 13:38:11.336: INFO: Successfully updated pod "annotationupdate3ab6d9a3-eb74-4a3b-9579-26edf5616fe8"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:13.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5721" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":122,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:09.879: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating the pod
Apr 16 13:38:09.915: INFO: The status of Pod labelsupdate95244afa-6588-49e1-a769-5b68ee3346d5 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:38:11.920: INFO: The status of Pod labelsupdate95244afa-6588-49e1-a769-5b68ee3346d5 is Running (Ready = true)
Apr 16 13:38:12.437: INFO: Successfully updated pod "labelsupdate95244afa-6588-49e1-a769-5b68ee3346d5"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:14.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7870" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":235,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:13.390: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:38:13.423: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 16 13:38:15.455: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:16.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6723" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":7,"skipped":141,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:14.470: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1571
[It] should update a single-container pod's image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 16 13:38:14.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1898 run
e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Apr 16 13:38:14.619: INFO: stderr: ""
Apr 16 13:38:14.619: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Apr 16 13:38:19.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1898 get pod e2e-test-httpd-pod -o json'
Apr 16 13:38:19.749: INFO: stderr: ""
Apr 16 13:38:19.749: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-04-16T13:38:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1898\",\n \"resourceVersion\": \"4048\",\n \"uid\": \"3f21ef0d-7a08-4ed6-bde1-792021947163\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-llt2h\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": 
\"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-llt2h\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-16T13:38:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-16T13:38:16Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-16T13:38:16Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-16T13:38:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://657b0a2411bd3d82780ff0dce36ead68f90bdfefb091024c8c4dc05b08ba0190\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-16T13:38:15Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.7\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.2.18\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.2.18\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": 
\"2022-04-16T13:38:14Z\"\n }\n}\n"
STEP: replace the image in the pod
Apr 16 13:38:19.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1898 replace -f -'
Apr 16 13:38:20.614: INFO: stderr: ""
Apr 16 13:38:20.614: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2
[AfterEach] Kubectl replace
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1575
Apr 16 13:38:20.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1898 delete pods e2e-test-httpd-pod'
Apr 16 13:38:22.194: INFO: stderr: ""
Apr 16 13:38:22.194: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:22.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1898" for this suite.
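The `kubectl replace -f -` step above pipes the pod's own JSON back to the server with only the container image swapped to busybox. A minimal sketch of the manifest shape involved (names and images are taken from this run's log; in practice the test round-trips the complete `kubectl get pod -o json` output, since a pod update may only change fields such as the image and a partial manifest would be rejected):

```yaml
# Hypothetical minimal shape of the replacement object; the real test edits
# the full JSON fetched via `kubectl get pod e2e-test-httpd-pod -o json`.
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  namespace: kubectl-1898
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-2   # was httpd:2.4.38-2
```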
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":15,"skipped":242,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:16.508: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:29.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8782" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":-1,"completed":8,"skipped":165,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:22.312: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service nodeport-service with the type=NodePort in namespace services-5494
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5494
STEP: creating replication controller externalsvc in namespace services-5494
I0416 13:38:22.378334 19 runners.go:193] Created replication controller with name: externalsvc, namespace: services-5494, replica count: 2
I0416 13:38:25.430766 19 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Apr 16 13:38:25.458: INFO: Creating new exec pod
Apr 16 13:38:27.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5494 exec execpodr8k4d -- /bin/sh -x -c nslookup nodeport-service.services-5494.svc.cluster.local'
Apr 16 13:38:27.697: INFO: stderr: "+ nslookup nodeport-service.services-5494.svc.cluster.local\n"
Apr 16 13:38:27.697: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-5494.svc.cluster.local\tcanonical name = externalsvc.services-5494.svc.cluster.local.\nName:\texternalsvc.services-5494.svc.cluster.local\nAddress: 10.128.231.101\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5494, will wait for the garbage collector to delete the pods
Apr 16 13:38:27.757: INFO: Deleting ReplicationController externalsvc took: 5.215199ms
Apr 16 13:38:27.857: INFO: Terminating ReplicationController externalsvc pods took: 100.282325ms
Apr 16 13:38:29.769: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:29.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5494" for this suite.
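The type change in the Services test above amounts to rewriting nodeport-service so that it becomes a DNS alias for externalsvc, which is exactly what the nslookup output (canonical name = externalsvc.services-5494.svc.cluster.local.) confirms. A sketch of the resulting Service, with names and namespace taken from this run (the assumption being that the test sets externalName to the in-cluster FQDN of externalsvc):

```yaml
# Approximate shape of nodeport-service after the test flips its type.
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service
  namespace: services-5494
spec:
  type: ExternalName
  externalName: externalsvc.services-5494.svc.cluster.local
```

With `type: ExternalName`, the cluster DNS serves a CNAME record instead of a ClusterIP, so lookups of nodeport-service resolve through to externalsvc's pods.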
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":16,"skipped":320,"failed":0}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:29.682: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Apr 16 13:38:31.732: INFO: pods: 0 < 3
Apr 16 13:38:33.736: INFO: running pods: 2 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observed all healthy pods
STEP: Patching the pdb to disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:37.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7416" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":9,"skipped":190,"failed":0}
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:37.851: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:37.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5251" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":10,"skipped":191,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:37.985: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:38:38.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 create -f -'
Apr 16 13:38:38.201: INFO: stderr: ""
Apr 16 13:38:38.201: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Apr 16 13:38:38.202: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 create -f -' Apr 16 13:38:38.398: INFO: stderr: "" Apr 16 13:38:38.398: INFO: stdout: "service/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Apr 16 13:38:39.402: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 13:38:39.402: INFO: Found 1 / 1 Apr 16 13:38:39.402: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 16 13:38:39.405: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 13:38:39.405: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 16 13:38:39.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 describe pod agnhost-primary-zt4hx' Apr 16 13:38:39.485: INFO: stderr: "" Apr 16 13:38:39.485: INFO: stdout: "Name: agnhost-primary-zt4hx\nNamespace: kubectl-8710\nPriority: 0\nNode: k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7/172.18.0.7\nStart Time: Sat, 16 Apr 2022 13:38:38 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.22\nIPs:\n IP: 192.168.2.22\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://28cfad1982df86ae2cd1f44248c88665008294d7d59ad15461fb45b4f05c6ff9\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 16 Apr 2022 13:38:38 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bdz4t (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-bdz4t:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n 
ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-8710/agnhost-primary-zt4hx to k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7\n Normal Pulled 1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 1s kubelet Created container agnhost-primary\n Normal Started 1s kubelet Started container agnhost-primary\n" Apr 16 13:38:39.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 describe rc agnhost-primary' Apr 16 13:38:39.575: INFO: stderr: "" Apr 16 13:38:39.575: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8710\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-zt4hx\n" Apr 16 13:38:39.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 describe service agnhost-primary' Apr 16 13:38:39.673: INFO: stderr: "" Apr 16 13:38:39.673: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-8710\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.136.219.33\nIPs: 
10.136.219.33\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.2.22:6379\nSession Affinity: None\nEvents: <none>\n" Apr 16 13:38:39.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 describe node k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb' Apr 16 13:38:39.784: INFO: stderr: "" Apr 16 13:38:39.784: INFO: stdout: "Name: k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-3a12zq\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-9831xy\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-3a12zq-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 16 Apr 2022 13:28:25 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\n AcquireTime: <unset>\n RenewTime: Sat, 16 Apr 2022 13:38:39 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 16 Apr 2022 13:34:24 +0000 Sat, 16 Apr 2022 13:28:25 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 16 Apr 2022 13:34:24 +0000 Sat, 
16 Apr 2022 13:28:25 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 16 Apr 2022 13:34:24 +0000 Sat, 16 Apr 2022 13:28:25 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 16 Apr 2022 13:34:24 +0000 Sat, 16 Apr 2022 13:29:07 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\nCapacity:\n cpu: 8\n ephemeral-storage: 253882800Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65865236Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253882800Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65865236Ki\n pods: 110\nSystem Info:\n Machine ID: 474e77be92a8499b9da683a1177b84d6\n System UUID: cbe92d4b-8c4e-446e-8a9e-1b5e5cf28dd2\n Boot ID: 19f2108f-6531-4118-b5b3-965673ea4c29\n Kernel Version: 5.4.0-1061-gke\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.2\n Kubelet Version: v1.23.5\n Kube-Proxy Version: v1.23.5\nPodCIDR: 192.168.5.0/24\nPodCIDRs: 192.168.5.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 8m46s\n kube-system kindnet-57zqx 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 10m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb 250m (3%) 0 (0%) 0 (0%) 0 (0%) 8m25s\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb 200m (2%) 0 (0%) 0 (0%) 0 (0%) 8m42s\n kube-system kube-proxy-rxtrv 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m47s\n kube-system kube-scheduler-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb 100m (1%) 0 (0%) 0 
(0%) 0 (0%) 8m43s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (9%) 100m (1%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 9m45s kube-proxy \n Normal Starting 5m45s kube-proxy \n"
Apr 16 13:38:39.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8710 describe namespace kubectl-8710'
Apr 16 13:38:39.864: INFO: stderr: ""
Apr 16 13:38:39.864: INFO: stdout: "Name: kubectl-8710\nLabels: e2e-framework=kubectl\n e2e-run=64cbac1c-f4ab-4e88-8c8b-fd7b0799bb7a\n kubernetes.io/metadata.name=kubectl-8710\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:39.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8710" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":11,"skipped":194,"failed":0}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:39.875: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:38:39.906: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:40.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4480" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":12,"skipped":194,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:40.484: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:40.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3209" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":13,"skipped":217,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:40.631: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:38:42.674: INFO: Deleting pod "var-expansion-bb43d414-dde2-40ab-8cca-4459eaa92ae8" in namespace "var-expansion-1395"
Apr 16 13:38:42.680: INFO: Wait up to 5m0s for pod "var-expansion-bb43d414-dde2-40ab-8cca-4459eaa92ae8" to be fully deleted
[AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:38:44.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1395" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":-1,"completed":14,"skipped":269,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:38:44.712: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:38:44.744: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2665
I0416 13:38:44.751115 17 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2665, replica count: 1
I0416 13:38:45.803227 17 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 16 13:38:45.915: INFO: Created: latency-svc-m49xp
Apr 16 13:38:45.919: INFO: Got endpoints: latency-svc-m49xp [15.801389ms]
Apr 16 13:38:45.934: INFO: Created: latency-svc-tvvhb
Apr 16 13:38:45.941: INFO: Created: latency-svc-glztp
Apr 16 13:38:45.943: INFO: Got endpoints: latency-svc-tvvhb [23.523756ms]
Apr 16 13:38:45.961: INFO: Got endpoints: latency-svc-glztp [41.263043ms]
Apr 16 13:38:45.965: INFO: Created: latency-svc-7bzp9
Apr 16 13:38:45.970: INFO: Got endpoints: latency-svc-7bzp9 [50.778037ms]
Apr 16 13:38:45.975: INFO: 
Created: latency-svc-cl4pr Apr 16 13:38:45.987: INFO: Got endpoints: latency-svc-cl4pr [67.484907ms] Apr 16 13:38:45.990: INFO: Created: latency-svc-6f5qs Apr 16 13:38:46.002: INFO: Created: latency-svc-q89zc Apr 16 13:38:46.002: INFO: Got endpoints: latency-svc-6f5qs [82.496337ms] Apr 16 13:38:46.014: INFO: Got endpoints: latency-svc-q89zc [94.65591ms] Apr 16 13:38:46.019: INFO: Created: latency-svc-lmr6l Apr 16 13:38:46.023: INFO: Got endpoints: latency-svc-lmr6l [103.251634ms] Apr 16 13:38:46.031: INFO: Created: latency-svc-d8qxk Apr 16 13:38:46.038: INFO: Got endpoints: latency-svc-d8qxk [117.898657ms] Apr 16 13:38:46.038: INFO: Created: latency-svc-5x9l5 Apr 16 13:38:46.044: INFO: Got endpoints: latency-svc-5x9l5 [124.669246ms] Apr 16 13:38:46.047: INFO: Created: latency-svc-jpnl4 Apr 16 13:38:46.055: INFO: Created: latency-svc-w82zc Apr 16 13:38:46.055: INFO: Got endpoints: latency-svc-jpnl4 [135.491153ms] Apr 16 13:38:46.066: INFO: Got endpoints: latency-svc-w82zc [146.126689ms] Apr 16 13:38:46.067: INFO: Created: latency-svc-6mdfn Apr 16 13:38:46.072: INFO: Got endpoints: latency-svc-6mdfn [152.635957ms] Apr 16 13:38:46.073: INFO: Created: latency-svc-wffjn Apr 16 13:38:46.084: INFO: Created: latency-svc-b4s64 Apr 16 13:38:46.084: INFO: Got endpoints: latency-svc-wffjn [164.054879ms] Apr 16 13:38:46.088: INFO: Got endpoints: latency-svc-b4s64 [168.190895ms] Apr 16 13:38:46.092: INFO: Created: latency-svc-r74hq Apr 16 13:38:46.097: INFO: Got endpoints: latency-svc-r74hq [177.395967ms] Apr 16 13:38:46.103: INFO: Created: latency-svc-rdhxr Apr 16 13:38:46.109: INFO: Got endpoints: latency-svc-rdhxr [166.364325ms] Apr 16 13:38:46.113: INFO: Created: latency-svc-78tg8 Apr 16 13:38:46.119: INFO: Got endpoints: latency-svc-78tg8 [157.698851ms] Apr 16 13:38:46.132: INFO: Created: latency-svc-jwfl6 Apr 16 13:38:46.139: INFO: Got endpoints: latency-svc-jwfl6 [169.095799ms] Apr 16 13:38:46.140: INFO: Created: latency-svc-rb8n6 Apr 16 13:38:46.149: INFO: Got endpoints: 
latency-svc-rb8n6 [161.516332ms] Apr 16 13:38:46.151: INFO: Created: latency-svc-wtmgk Apr 16 13:38:46.158: INFO: Got endpoints: latency-svc-wtmgk [155.549192ms] Apr 16 13:38:46.161: INFO: Created: latency-svc-vv4bl Apr 16 13:38:46.173: INFO: Got endpoints: latency-svc-vv4bl [158.203437ms] Apr 16 13:38:46.175: INFO: Created: latency-svc-th7ng Apr 16 13:38:46.180: INFO: Got endpoints: latency-svc-th7ng [157.166513ms] Apr 16 13:38:46.188: INFO: Created: latency-svc-t4n92 Apr 16 13:38:46.198: INFO: Got endpoints: latency-svc-t4n92 [160.318129ms] Apr 16 13:38:46.210: INFO: Created: latency-svc-k6zzn Apr 16 13:38:46.216: INFO: Got endpoints: latency-svc-k6zzn [171.762977ms] Apr 16 13:38:46.223: INFO: Created: latency-svc-bpkkf Apr 16 13:38:46.232: INFO: Got endpoints: latency-svc-bpkkf [177.250293ms] Apr 16 13:38:46.233: INFO: Created: latency-svc-gs4lb Apr 16 13:38:46.239: INFO: Got endpoints: latency-svc-gs4lb [173.109228ms] Apr 16 13:38:46.242: INFO: Created: latency-svc-6grzp Apr 16 13:38:46.249: INFO: Got endpoints: latency-svc-6grzp [176.250125ms] Apr 16 13:38:46.252: INFO: Created: latency-svc-pgn5f Apr 16 13:38:46.258: INFO: Got endpoints: latency-svc-pgn5f [174.046163ms] Apr 16 13:38:46.263: INFO: Created: latency-svc-th9bq Apr 16 13:38:46.270: INFO: Created: latency-svc-t8xsj Apr 16 13:38:46.270: INFO: Got endpoints: latency-svc-th9bq [181.959054ms] Apr 16 13:38:46.286: INFO: Got endpoints: latency-svc-t8xsj [188.568622ms] Apr 16 13:38:46.286: INFO: Created: latency-svc-g8btn Apr 16 13:38:46.292: INFO: Got endpoints: latency-svc-g8btn [182.51865ms] Apr 16 13:38:46.303: INFO: Created: latency-svc-9mvw4 Apr 16 13:38:46.305: INFO: Got endpoints: latency-svc-9mvw4 [186.229767ms] Apr 16 13:38:46.310: INFO: Created: latency-svc-z6qq9 Apr 16 13:38:46.317: INFO: Got endpoints: latency-svc-z6qq9 [178.094103ms] Apr 16 13:38:46.319: INFO: Created: latency-svc-965hb Apr 16 13:38:46.324: INFO: Got endpoints: latency-svc-965hb [175.290314ms] Apr 16 13:38:46.331: INFO: 
Created: latency-svc-74kmq Apr 16 13:38:46.337: INFO: Got endpoints: latency-svc-74kmq [179.289551ms] Apr 16 13:38:46.340: INFO: Created: latency-svc-m7xrk Apr 16 13:38:46.347: INFO: Got endpoints: latency-svc-m7xrk [174.11705ms] Apr 16 13:38:46.349: INFO: Created: latency-svc-kkczx Apr 16 13:38:46.356: INFO: Created: latency-svc-rqhbk Apr 16 13:38:46.356: INFO: Got endpoints: latency-svc-kkczx [175.772816ms] Apr 16 13:38:46.362: INFO: Created: latency-svc-wtjxg Apr 16 13:38:46.367: INFO: Created: latency-svc-xcbrb Apr 16 13:38:46.371: INFO: Got endpoints: latency-svc-rqhbk [172.762669ms] Apr 16 13:38:46.377: INFO: Created: latency-svc-hdvnx Apr 16 13:38:46.385: INFO: Created: latency-svc-vmqzb Apr 16 13:38:46.400: INFO: Created: latency-svc-dddxb Apr 16 13:38:46.406: INFO: Created: latency-svc-26tlc Apr 16 13:38:46.411: INFO: Created: latency-svc-gt46m Apr 16 13:38:46.421: INFO: Created: latency-svc-78xnx Apr 16 13:38:46.427: INFO: Got endpoints: latency-svc-wtjxg [210.479876ms] Apr 16 13:38:46.433: INFO: Created: latency-svc-nmv8w Apr 16 13:38:46.443: INFO: Created: latency-svc-mx8pz Apr 16 13:38:46.452: INFO: Created: latency-svc-7vh8v Apr 16 13:38:46.456: INFO: Created: latency-svc-kq8dl Apr 16 13:38:46.461: INFO: Created: latency-svc-9b5c8 Apr 16 13:38:46.467: INFO: Created: latency-svc-d5z7w Apr 16 13:38:46.472: INFO: Got endpoints: latency-svc-xcbrb [239.534173ms] Apr 16 13:38:46.479: INFO: Created: latency-svc-2vkvj Apr 16 13:38:46.489: INFO: Created: latency-svc-v5clx Apr 16 13:38:46.508: INFO: Created: latency-svc-9snjm Apr 16 13:38:46.520: INFO: Got endpoints: latency-svc-hdvnx [281.113599ms] Apr 16 13:38:46.535: INFO: Created: latency-svc-mvft7 Apr 16 13:38:46.571: INFO: Got endpoints: latency-svc-vmqzb [322.436538ms] Apr 16 13:38:46.586: INFO: Created: latency-svc-zp6k5 Apr 16 13:38:46.622: INFO: Got endpoints: latency-svc-dddxb [363.970217ms] Apr 16 13:38:46.636: INFO: Created: latency-svc-hntmx Apr 16 13:38:46.670: INFO: Got endpoints: 
latency-svc-26tlc [399.492129ms] Apr 16 13:38:46.685: INFO: Created: latency-svc-8n6f9 Apr 16 13:38:46.723: INFO: Got endpoints: latency-svc-gt46m [437.227441ms] Apr 16 13:38:46.742: INFO: Created: latency-svc-vsh7w Apr 16 13:38:46.771: INFO: Got endpoints: latency-svc-78xnx [479.023909ms] Apr 16 13:38:46.791: INFO: Created: latency-svc-gzz7p Apr 16 13:38:46.820: INFO: Got endpoints: latency-svc-nmv8w [515.0223ms] Apr 16 13:38:46.840: INFO: Created: latency-svc-t6ck7 Apr 16 13:38:46.872: INFO: Got endpoints: latency-svc-mx8pz [554.734514ms] Apr 16 13:38:46.887: INFO: Created: latency-svc-nfzwc Apr 16 13:38:46.925: INFO: Got endpoints: latency-svc-7vh8v [600.907455ms] Apr 16 13:38:46.938: INFO: Created: latency-svc-skstk Apr 16 13:38:46.970: INFO: Got endpoints: latency-svc-kq8dl [633.187894ms] Apr 16 13:38:46.993: INFO: Created: latency-svc-dh5w7 Apr 16 13:38:47.022: INFO: Got endpoints: latency-svc-9b5c8 [674.795022ms] Apr 16 13:38:47.050: INFO: Created: latency-svc-wmd86 Apr 16 13:38:47.073: INFO: Got endpoints: latency-svc-d5z7w [717.083627ms] Apr 16 13:38:47.085: INFO: Created: latency-svc-fnz6x Apr 16 13:38:47.120: INFO: Got endpoints: latency-svc-2vkvj [748.56444ms] Apr 16 13:38:47.134: INFO: Created: latency-svc-k7dp7 Apr 16 13:38:47.170: INFO: Got endpoints: latency-svc-v5clx [743.232172ms] Apr 16 13:38:47.184: INFO: Created: latency-svc-8v2p5 Apr 16 13:38:47.221: INFO: Got endpoints: latency-svc-9snjm [747.929229ms] Apr 16 13:38:47.235: INFO: Created: latency-svc-fktwp Apr 16 13:38:47.270: INFO: Got endpoints: latency-svc-mvft7 [749.70705ms] Apr 16 13:38:47.282: INFO: Created: latency-svc-hzvnz Apr 16 13:38:47.322: INFO: Got endpoints: latency-svc-zp6k5 [750.622604ms] Apr 16 13:38:47.336: INFO: Created: latency-svc-dz4hq Apr 16 13:38:47.373: INFO: Got endpoints: latency-svc-hntmx [750.744257ms] Apr 16 13:38:47.388: INFO: Created: latency-svc-tvsdk Apr 16 13:38:47.423: INFO: Got endpoints: latency-svc-8n6f9 [753.726778ms] Apr 16 13:38:47.436: INFO: Created: 
latency-svc-64m8t Apr 16 13:38:47.470: INFO: Got endpoints: latency-svc-vsh7w [746.316162ms] Apr 16 13:38:47.484: INFO: Created: latency-svc-vsjmz Apr 16 13:38:47.521: INFO: Got endpoints: latency-svc-gzz7p [748.707948ms] Apr 16 13:38:47.539: INFO: Created: latency-svc-7p9vp Apr 16 13:38:47.574: INFO: Got endpoints: latency-svc-t6ck7 [753.680122ms] Apr 16 13:38:47.587: INFO: Created: latency-svc-nwkgx Apr 16 13:38:47.620: INFO: Got endpoints: latency-svc-nfzwc [747.682552ms] Apr 16 13:38:47.636: INFO: Created: latency-svc-ct4r9 Apr 16 13:38:47.670: INFO: Got endpoints: latency-svc-skstk [744.709561ms] Apr 16 13:38:47.689: INFO: Created: latency-svc-dvnxk Apr 16 13:38:47.720: INFO: Got endpoints: latency-svc-dh5w7 [749.227076ms] Apr 16 13:38:47.735: INFO: Created: latency-svc-jjxbs Apr 16 13:38:47.771: INFO: Got endpoints: latency-svc-wmd86 [748.697889ms] Apr 16 13:38:47.790: INFO: Created: latency-svc-jdntp Apr 16 13:38:47.823: INFO: Got endpoints: latency-svc-fnz6x [749.985245ms] Apr 16 13:38:47.838: INFO: Created: latency-svc-8bqmq Apr 16 13:38:47.871: INFO: Got endpoints: latency-svc-k7dp7 [749.960527ms] Apr 16 13:38:47.885: INFO: Created: latency-svc-256wb Apr 16 13:38:47.920: INFO: Got endpoints: latency-svc-8v2p5 [749.90489ms] Apr 16 13:38:47.936: INFO: Created: latency-svc-qvtgv Apr 16 13:38:47.973: INFO: Got endpoints: latency-svc-fktwp [752.142588ms] Apr 16 13:38:47.991: INFO: Created: latency-svc-vnvmx Apr 16 13:38:48.023: INFO: Got endpoints: latency-svc-hzvnz [753.095858ms] Apr 16 13:38:48.040: INFO: Created: latency-svc-fcf6m Apr 16 13:38:48.073: INFO: Got endpoints: latency-svc-dz4hq [750.551263ms] Apr 16 13:38:48.087: INFO: Created: latency-svc-24b6m Apr 16 13:38:48.120: INFO: Got endpoints: latency-svc-tvsdk [747.66062ms] Apr 16 13:38:48.134: INFO: Created: latency-svc-q24hd Apr 16 13:38:48.170: INFO: Got endpoints: latency-svc-64m8t [746.433349ms] Apr 16 13:38:48.183: INFO: Created: latency-svc-dtlll Apr 16 13:38:48.233: INFO: Got endpoints: 
latency-svc-vsjmz [763.329747ms] Apr 16 13:38:48.270: INFO: Created: latency-svc-6crpn Apr 16 13:38:48.274: INFO: Got endpoints: latency-svc-7p9vp [753.124155ms] Apr 16 13:38:48.284: INFO: Created: latency-svc-2v7fl Apr 16 13:38:48.326: INFO: Got endpoints: latency-svc-nwkgx [751.76985ms] Apr 16 13:38:48.337: INFO: Created: latency-svc-ltlqf Apr 16 13:38:48.369: INFO: Got endpoints: latency-svc-ct4r9 [748.804619ms] Apr 16 13:38:48.381: INFO: Created: latency-svc-hqgr9 Apr 16 13:38:48.424: INFO: Got endpoints: latency-svc-dvnxk [753.746784ms] Apr 16 13:38:48.435: INFO: Created: latency-svc-f5crm Apr 16 13:38:48.470: INFO: Got endpoints: latency-svc-jjxbs [750.086274ms] Apr 16 13:38:48.479: INFO: Created: latency-svc-dx6gc Apr 16 13:38:48.520: INFO: Got endpoints: latency-svc-jdntp [749.659245ms] Apr 16 13:38:48.535: INFO: Created: latency-svc-fp89p Apr 16 13:38:48.570: INFO: Got endpoints: latency-svc-8bqmq [747.051132ms] Apr 16 13:38:48.580: INFO: Created: latency-svc-5dd56 Apr 16 13:38:48.622: INFO: Got endpoints: latency-svc-256wb [750.80615ms] Apr 16 13:38:48.637: INFO: Created: latency-svc-vh67w Apr 16 13:38:48.670: INFO: Got endpoints: latency-svc-qvtgv [750.278953ms] Apr 16 13:38:48.682: INFO: Created: latency-svc-n6rwg Apr 16 13:38:48.722: INFO: Got endpoints: latency-svc-vnvmx [749.370744ms] Apr 16 13:38:48.733: INFO: Created: latency-svc-zwvrp Apr 16 13:38:48.770: INFO: Got endpoints: latency-svc-fcf6m [746.365916ms] Apr 16 13:38:48.781: INFO: Created: latency-svc-t8gxr Apr 16 13:38:48.822: INFO: Got endpoints: latency-svc-24b6m [749.512232ms] Apr 16 13:38:48.834: INFO: Created: latency-svc-qp9gr Apr 16 13:38:48.872: INFO: Got endpoints: latency-svc-q24hd [752.037039ms] Apr 16 13:38:48.884: INFO: Created: latency-svc-m9lq6 Apr 16 13:38:48.920: INFO: Got endpoints: latency-svc-dtlll [749.582359ms] Apr 16 13:38:48.932: INFO: Created: latency-svc-9js5p Apr 16 13:38:48.970: INFO: Got endpoints: latency-svc-6crpn [736.865765ms] Apr 16 13:38:48.988: INFO: 
Created: latency-svc-rd9cv Apr 16 13:38:49.028: INFO: Got endpoints: latency-svc-2v7fl [754.438009ms] Apr 16 13:38:49.051: INFO: Created: latency-svc-px7l5 Apr 16 13:38:49.072: INFO: Got endpoints: latency-svc-ltlqf [746.022306ms] Apr 16 13:38:49.090: INFO: Created: latency-svc-b8zqh Apr 16 13:38:49.120: INFO: Got endpoints: latency-svc-hqgr9 [751.045439ms] Apr 16 13:38:49.131: INFO: Created: latency-svc-9ndrc Apr 16 13:38:49.170: INFO: Got endpoints: latency-svc-f5crm [746.531874ms] Apr 16 13:38:49.182: INFO: Created: latency-svc-jvzf4 Apr 16 13:38:49.222: INFO: Got endpoints: latency-svc-dx6gc [751.962308ms] Apr 16 13:38:49.233: INFO: Created: latency-svc-nnfcf Apr 16 13:38:49.274: INFO: Got endpoints: latency-svc-fp89p [753.004751ms] Apr 16 13:38:49.283: INFO: Created: latency-svc-t6x88 Apr 16 13:38:49.323: INFO: Got endpoints: latency-svc-5dd56 [752.469088ms] Apr 16 13:38:49.334: INFO: Created: latency-svc-8s5n7 Apr 16 13:38:49.375: INFO: Got endpoints: latency-svc-vh67w [752.313206ms] Apr 16 13:38:49.386: INFO: Created: latency-svc-2q94g Apr 16 13:38:49.422: INFO: Got endpoints: latency-svc-n6rwg [751.010287ms] Apr 16 13:38:49.433: INFO: Created: latency-svc-zmjp9 Apr 16 13:38:49.472: INFO: Got endpoints: latency-svc-zwvrp [749.926156ms] Apr 16 13:38:49.489: INFO: Created: latency-svc-jzxmw Apr 16 13:38:49.522: INFO: Got endpoints: latency-svc-t8gxr [751.738221ms] Apr 16 13:38:49.534: INFO: Created: latency-svc-dqmsm Apr 16 13:38:49.572: INFO: Got endpoints: latency-svc-qp9gr [750.052608ms] Apr 16 13:38:49.583: INFO: Created: latency-svc-r69rm Apr 16 13:38:49.620: INFO: Got endpoints: latency-svc-m9lq6 [747.678258ms] Apr 16 13:38:49.636: INFO: Created: latency-svc-vddlb Apr 16 13:38:49.671: INFO: Got endpoints: latency-svc-9js5p [751.44458ms] Apr 16 13:38:49.681: INFO: Created: latency-svc-s44jz Apr 16 13:38:49.721: INFO: Got endpoints: latency-svc-rd9cv [751.123024ms] Apr 16 13:38:49.733: INFO: Created: latency-svc-s4rj6 Apr 16 13:38:49.770: INFO: Got 
endpoints: latency-svc-px7l5 [741.656843ms] Apr 16 13:38:49.787: INFO: Created: latency-svc-cs8xs Apr 16 13:38:49.825: INFO: Got endpoints: latency-svc-b8zqh [753.484055ms] Apr 16 13:38:49.836: INFO: Created: latency-svc-26r5k Apr 16 13:38:49.872: INFO: Got endpoints: latency-svc-9ndrc [751.577799ms] Apr 16 13:38:49.884: INFO: Created: latency-svc-j5xjs Apr 16 13:38:49.921: INFO: Got endpoints: latency-svc-jvzf4 [751.028163ms] Apr 16 13:38:49.932: INFO: Created: latency-svc-xxfpb Apr 16 13:38:49.971: INFO: Got endpoints: latency-svc-nnfcf [748.794807ms] Apr 16 13:38:49.991: INFO: Created: latency-svc-hgf9k Apr 16 13:38:50.025: INFO: Got endpoints: latency-svc-t6x88 [751.261101ms] Apr 16 13:38:50.044: INFO: Created: latency-svc-xbrc4 Apr 16 13:38:50.075: INFO: Got endpoints: latency-svc-8s5n7 [752.154681ms] Apr 16 13:38:50.095: INFO: Created: latency-svc-mr9pk Apr 16 13:38:50.124: INFO: Got endpoints: latency-svc-2q94g [749.604086ms] Apr 16 13:38:50.135: INFO: Created: latency-svc-s66t5 Apr 16 13:38:50.170: INFO: Got endpoints: latency-svc-zmjp9 [747.941916ms] Apr 16 13:38:50.182: INFO: Created: latency-svc-g7fg7 Apr 16 13:38:50.220: INFO: Got endpoints: latency-svc-jzxmw [747.425802ms] Apr 16 13:38:50.234: INFO: Created: latency-svc-4k65c Apr 16 13:38:50.270: INFO: Got endpoints: latency-svc-dqmsm [747.799665ms] Apr 16 13:38:50.280: INFO: Created: latency-svc-8hlrn Apr 16 13:38:50.319: INFO: Got endpoints: latency-svc-r69rm [746.645767ms] Apr 16 13:38:50.335: INFO: Created: latency-svc-fcmqb Apr 16 13:38:50.371: INFO: Got endpoints: latency-svc-vddlb [750.700926ms] Apr 16 13:38:50.382: INFO: Created: latency-svc-4r9b6 Apr 16 13:38:50.423: INFO: Got endpoints: latency-svc-s44jz [752.366843ms] Apr 16 13:38:50.433: INFO: Created: latency-svc-v4v8b Apr 16 13:38:50.470: INFO: Got endpoints: latency-svc-s4rj6 [748.758086ms] Apr 16 13:38:50.480: INFO: Created: latency-svc-5j59s Apr 16 13:38:50.520: INFO: Got endpoints: latency-svc-cs8xs [749.620772ms] Apr 16 13:38:50.532: 
INFO: Created: latency-svc-gmj4f Apr 16 13:38:50.570: INFO: Got endpoints: latency-svc-26r5k [745.052403ms] Apr 16 13:38:50.581: INFO: Created: latency-svc-vj8rl Apr 16 13:38:50.621: INFO: Got endpoints: latency-svc-j5xjs [747.947116ms] Apr 16 13:38:50.632: INFO: Created: latency-svc-bggbc Apr 16 13:38:50.672: INFO: Got endpoints: latency-svc-xxfpb [750.620944ms] Apr 16 13:38:50.684: INFO: Created: latency-svc-ddvb4 Apr 16 13:38:50.723: INFO: Got endpoints: latency-svc-hgf9k [751.857583ms] Apr 16 13:38:50.745: INFO: Created: latency-svc-gsthj Apr 16 13:38:50.771: INFO: Got endpoints: latency-svc-xbrc4 [745.852667ms] Apr 16 13:38:50.781: INFO: Created: latency-svc-xtt5d Apr 16 13:38:50.820: INFO: Got endpoints: latency-svc-mr9pk [744.501144ms] Apr 16 13:38:50.830: INFO: Created: latency-svc-n4dvn Apr 16 13:38:50.871: INFO: Got endpoints: latency-svc-s66t5 [746.950789ms] Apr 16 13:38:50.881: INFO: Created: latency-svc-jlvd6 Apr 16 13:38:50.927: INFO: Got endpoints: latency-svc-g7fg7 [757.487163ms] Apr 16 13:38:50.949: INFO: Created: latency-svc-9kk2n Apr 16 13:38:50.975: INFO: Got endpoints: latency-svc-4k65c [754.97317ms] Apr 16 13:38:50.988: INFO: Created: latency-svc-kjlqf Apr 16 13:38:51.026: INFO: Got endpoints: latency-svc-8hlrn [756.084634ms] Apr 16 13:38:51.042: INFO: Created: latency-svc-zdhnd Apr 16 13:38:51.070: INFO: Got endpoints: latency-svc-fcmqb [750.511753ms] Apr 16 13:38:51.086: INFO: Created: latency-svc-hc7q7 Apr 16 13:38:51.122: INFO: Got endpoints: latency-svc-4r9b6 [751.251296ms] Apr 16 13:38:51.132: INFO: Created: latency-svc-nqj6s Apr 16 13:38:51.174: INFO: Got endpoints: latency-svc-v4v8b [750.119694ms] Apr 16 13:38:51.192: INFO: Created: latency-svc-8fvz2 Apr 16 13:38:51.219: INFO: Got endpoints: latency-svc-5j59s [749.558971ms] Apr 16 13:38:51.230: INFO: Created: latency-svc-p7dpt Apr 16 13:38:51.271: INFO: Got endpoints: latency-svc-gmj4f [751.005232ms] Apr 16 13:38:51.283: INFO: Created: latency-svc-mn72r Apr 16 13:38:51.322: INFO: Got 
endpoints: latency-svc-vj8rl [751.182365ms] Apr 16 13:38:51.335: INFO: Created: latency-svc-hnpvm Apr 16 13:38:51.371: INFO: Got endpoints: latency-svc-bggbc [749.892665ms] Apr 16 13:38:51.383: INFO: Created: latency-svc-g65fj Apr 16 13:38:51.420: INFO: Got endpoints: latency-svc-ddvb4 [747.627286ms] Apr 16 13:38:51.430: INFO: Created: latency-svc-mcf7w Apr 16 13:38:51.470: INFO: Got endpoints: latency-svc-gsthj [746.71935ms] Apr 16 13:38:51.483: INFO: Created: latency-svc-zxrcm Apr 16 13:38:51.519: INFO: Got endpoints: latency-svc-xtt5d [748.42294ms] Apr 16 13:38:51.531: INFO: Created: latency-svc-d96lj Apr 16 13:38:51.573: INFO: Got endpoints: latency-svc-n4dvn [753.221537ms] Apr 16 13:38:51.588: INFO: Created: latency-svc-jrmk5 Apr 16 13:38:51.621: INFO: Got endpoints: latency-svc-jlvd6 [749.715672ms] Apr 16 13:38:51.633: INFO: Created: latency-svc-vjhzx Apr 16 13:38:51.669: INFO: Got endpoints: latency-svc-9kk2n [742.245028ms] Apr 16 13:38:51.680: INFO: Created: latency-svc-ptnpv Apr 16 13:38:51.720: INFO: Got endpoints: latency-svc-kjlqf [744.731995ms] Apr 16 13:38:51.730: INFO: Created: latency-svc-6xtf6 Apr 16 13:38:51.770: INFO: Got endpoints: latency-svc-zdhnd [743.809657ms] Apr 16 13:38:51.784: INFO: Created: latency-svc-h7vt6 Apr 16 13:38:51.821: INFO: Got endpoints: latency-svc-hc7q7 [750.709968ms] Apr 16 13:38:51.834: INFO: Created: latency-svc-md5nl Apr 16 13:38:51.871: INFO: Got endpoints: latency-svc-nqj6s [748.609807ms] Apr 16 13:38:51.882: INFO: Created: latency-svc-mgrwj Apr 16 13:38:51.926: INFO: Got endpoints: latency-svc-8fvz2 [752.391972ms] Apr 16 13:38:51.939: INFO: Created: latency-svc-zwztg Apr 16 13:38:51.970: INFO: Got endpoints: latency-svc-p7dpt [750.450049ms] Apr 16 13:38:51.984: INFO: Created: latency-svc-ddhxk Apr 16 13:38:52.025: INFO: Got endpoints: latency-svc-mn72r [754.241941ms] Apr 16 13:38:52.037: INFO: Created: latency-svc-cfv57 Apr 16 13:38:52.070: INFO: Got endpoints: latency-svc-hnpvm [748.542957ms] Apr 16 13:38:52.090: 
INFO: Created: latency-svc-bxptm Apr 16 13:38:52.121: INFO: Got endpoints: latency-svc-g65fj [749.814337ms] Apr 16 13:38:52.130: INFO: Created: latency-svc-mzvmg Apr 16 13:38:52.170: INFO: Got endpoints: latency-svc-mcf7w [749.959616ms] Apr 16 13:38:52.185: INFO: Created: latency-svc-2rxwq Apr 16 13:38:52.221: INFO: Got endpoints: latency-svc-zxrcm [750.669207ms] Apr 16 13:38:52.230: INFO: Created: latency-svc-6dxhh Apr 16 13:38:52.271: INFO: Got endpoints: latency-svc-d96lj [751.923869ms] Apr 16 13:38:52.282: INFO: Created: latency-svc-wstr7 Apr 16 13:38:52.320: INFO: Got endpoints: latency-svc-jrmk5 [747.393335ms] Apr 16 13:38:52.332: INFO: Created: latency-svc-qhd85 Apr 16 13:38:52.369: INFO: Got endpoints: latency-svc-vjhzx [748.286318ms] Apr 16 13:38:52.379: INFO: Created: latency-svc-w7rpm Apr 16 13:38:52.420: INFO: Got endpoints: latency-svc-ptnpv [750.855352ms] Apr 16 13:38:52.432: INFO: Created: latency-svc-gqkqv Apr 16 13:38:52.470: INFO: Got endpoints: latency-svc-6xtf6 [750.181768ms] Apr 16 13:38:52.480: INFO: Created: latency-svc-qwvgm Apr 16 13:38:52.520: INFO: Got endpoints: latency-svc-h7vt6 [749.922995ms] Apr 16 13:38:52.535: INFO: Created: latency-svc-hgxsf Apr 16 13:38:52.570: INFO: Got endpoints: latency-svc-md5nl [749.283915ms] Apr 16 13:38:52.581: INFO: Created: latency-svc-kjffc Apr 16 13:38:52.620: INFO: Got endpoints: latency-svc-mgrwj [748.87759ms] Apr 16 13:38:52.635: INFO: Created: latency-svc-64xm6 Apr 16 13:38:52.671: INFO: Got endpoints: latency-svc-zwztg [745.096139ms] Apr 16 13:38:52.682: INFO: Created: latency-svc-gtmx7 Apr 16 13:38:52.720: INFO: Got endpoints: latency-svc-ddhxk [749.833378ms] Apr 16 13:38:52.731: INFO: Created: latency-svc-zz5vh Apr 16 13:38:52.774: INFO: Got endpoints: latency-svc-cfv57 [749.000427ms] Apr 16 13:38:52.784: INFO: Created: latency-svc-2vnhn Apr 16 13:38:52.822: INFO: Got endpoints: latency-svc-bxptm [751.731415ms] Apr 16 13:38:52.833: INFO: Created: latency-svc-s95dq Apr 16 13:38:52.870: INFO: Got 
endpoints: latency-svc-mzvmg [747.916764ms] Apr 16 13:38:52.886: INFO: Created: latency-svc-j66tg Apr 16 13:38:52.924: INFO: Got endpoints: latency-svc-2rxwq [753.866459ms] Apr 16 13:38:52.934: INFO: Created: latency-svc-mnftq Apr 16 13:38:52.970: INFO: Got endpoints: latency-svc-6dxhh [748.487825ms] Apr 16 13:38:52.992: INFO: Created: latency-svc-6wjmj Apr 16 13:38:53.026: INFO: Got endpoints: latency-svc-wstr7 [753.918504ms] Apr 16 13:38:53.040: INFO: Created: latency-svc-b5qq4 Apr 16 13:38:53.073: INFO: Got endpoints: latency-svc-qhd85 [751.956834ms] Apr 16 13:38:53.086: INFO: Created: latency-svc-tcl5l Apr 16 13:38:53.128: INFO: Got endpoints: latency-svc-w7rpm [758.896997ms] Apr 16 13:38:53.137: INFO: Created: latency-svc-7wl24 Apr 16 13:38:53.173: INFO: Got endpoints: latency-svc-gqkqv [752.926916ms] Apr 16 13:38:53.183: INFO: Created: latency-svc-d6p24 Apr 16 13:38:53.219: INFO: Got endpoints: latency-svc-qwvgm [749.597231ms] Apr 16 13:38:53.228: INFO: Created: latency-svc-5swhn Apr 16 13:38:53.272: INFO: Got endpoints: latency-svc-hgxsf [751.718188ms] Apr 16 13:38:53.282: INFO: Created: latency-svc-kvz52 Apr 16 13:38:53.322: INFO: Got endpoints: latency-svc-kjffc [751.942599ms] Apr 16 13:38:53.333: INFO: Created: latency-svc-74f4b Apr 16 13:38:53.372: INFO: Got endpoints: latency-svc-64xm6 [751.81218ms] Apr 16 13:38:53.382: INFO: Created: latency-svc-gbhhx Apr 16 13:38:53.422: INFO: Got endpoints: latency-svc-gtmx7 [750.418861ms] Apr 16 13:38:53.434: INFO: Created: latency-svc-2nhtr Apr 16 13:38:53.473: INFO: Got endpoints: latency-svc-zz5vh [751.839393ms] Apr 16 13:38:53.497: INFO: Created: latency-svc-8kjvh Apr 16 13:38:53.525: INFO: Got endpoints: latency-svc-2vnhn [750.82751ms] Apr 16 13:38:53.538: INFO: Created: latency-svc-sqdzb Apr 16 13:38:53.570: INFO: Got endpoints: latency-svc-s95dq [747.443666ms] Apr 16 13:38:53.580: INFO: Created: latency-svc-rdmrr Apr 16 13:38:53.623: INFO: Got endpoints: latency-svc-j66tg [753.559833ms] Apr 16 13:38:53.633: 
INFO: Created: latency-svc-9cgq6 Apr 16 13:38:53.670: INFO: Got endpoints: latency-svc-mnftq [746.070058ms] Apr 16 13:38:53.680: INFO: Created: latency-svc-g495d Apr 16 13:38:53.720: INFO: Got endpoints: latency-svc-6wjmj [750.334103ms] Apr 16 13:38:53.733: INFO: Created: latency-svc-d5p6w Apr 16 13:38:53.771: INFO: Got endpoints: latency-svc-b5qq4 [745.383715ms] Apr 16 13:38:53.820: INFO: Got endpoints: latency-svc-tcl5l [746.975049ms] Apr 16 13:38:53.873: INFO: Got endpoints: latency-svc-7wl24 [744.34889ms] Apr 16 13:38:53.920: INFO: Got endpoints: latency-svc-d6p24 [746.539155ms] Apr 16 13:38:53.972: INFO: Got endpoints: latency-svc-5swhn [752.402535ms] Apr 16 13:38:54.028: INFO: Got endpoints: latency-svc-kvz52 [755.73473ms] Apr 16 13:38:54.071: INFO: Got endpoints: latency-svc-74f4b [749.119603ms] Apr 16 13:38:54.125: INFO: Got endpoints: latency-svc-gbhhx [752.665853ms] Apr 16 13:38:54.170: INFO: Got endpoints: latency-svc-2nhtr [748.171506ms] Apr 16 13:38:54.220: INFO: Got endpoints: latency-svc-8kjvh [747.119195ms] Apr 16 13:38:54.273: INFO: Got endpoints: latency-svc-sqdzb [747.567227ms] Apr 16 13:38:54.322: INFO: Got endpoints: latency-svc-rdmrr [751.594025ms] Apr 16 13:38:54.369: INFO: Got endpoints: latency-svc-9cgq6 [746.142953ms] Apr 16 13:38:54.420: INFO: Got endpoints: latency-svc-g495d [750.357113ms] Apr 16 13:38:54.470: INFO: Got endpoints: latency-svc-d5p6w [750.416658ms] Apr 16 13:38:54.471: INFO: Latencies: [23.523756ms 41.263043ms 50.778037ms 67.484907ms 82.496337ms 94.65591ms 103.251634ms 117.898657ms 124.669246ms 135.491153ms 146.126689ms 152.635957ms 155.549192ms 157.166513ms 157.698851ms 158.203437ms 160.318129ms 161.516332ms 164.054879ms 166.364325ms 168.190895ms 169.095799ms 171.762977ms 172.762669ms 173.109228ms 174.046163ms 174.11705ms 175.290314ms 175.772816ms 176.250125ms 177.250293ms 177.395967ms 178.094103ms 179.289551ms 181.959054ms 182.51865ms 186.229767ms 188.568622ms 210.479876ms 239.534173ms 281.113599ms 322.436538ms 
363.970217ms 399.492129ms 437.227441ms 479.023909ms 515.0223ms 554.734514ms 600.907455ms 633.187894ms 674.795022ms 717.083627ms 736.865765ms 741.656843ms 742.245028ms 743.232172ms 743.809657ms 744.34889ms 744.501144ms 744.709561ms 744.731995ms 745.052403ms 745.096139ms 745.383715ms 745.852667ms 746.022306ms 746.070058ms 746.142953ms 746.316162ms 746.365916ms 746.433349ms 746.531874ms 746.539155ms 746.645767ms 746.71935ms 746.950789ms 746.975049ms 747.051132ms 747.119195ms 747.393335ms 747.425802ms 747.443666ms 747.567227ms 747.627286ms 747.66062ms 747.678258ms 747.682552ms 747.799665ms 747.916764ms 747.929229ms 747.941916ms 747.947116ms 748.171506ms 748.286318ms 748.42294ms 748.487825ms 748.542957ms 748.56444ms 748.609807ms 748.697889ms 748.707948ms 748.758086ms 748.794807ms 748.804619ms 748.87759ms 749.000427ms 749.119603ms 749.227076ms 749.283915ms 749.370744ms 749.512232ms 749.558971ms 749.582359ms 749.597231ms 749.604086ms 749.620772ms 749.659245ms 749.70705ms 749.715672ms 749.814337ms 749.833378ms 749.892665ms 749.90489ms 749.922995ms 749.926156ms 749.959616ms 749.960527ms 749.985245ms 750.052608ms 750.086274ms 750.119694ms 750.181768ms 750.278953ms 750.334103ms 750.357113ms 750.416658ms 750.418861ms 750.450049ms 750.511753ms 750.551263ms 750.620944ms 750.622604ms 750.669207ms 750.700926ms 750.709968ms 750.744257ms 750.80615ms 750.82751ms 750.855352ms 751.005232ms 751.010287ms 751.028163ms 751.045439ms 751.123024ms 751.182365ms 751.251296ms 751.261101ms 751.44458ms 751.577799ms 751.594025ms 751.718188ms 751.731415ms 751.738221ms 751.76985ms 751.81218ms 751.839393ms 751.857583ms 751.923869ms 751.942599ms 751.956834ms 751.962308ms 752.037039ms 752.142588ms 752.154681ms 752.313206ms 752.366843ms 752.391972ms 752.402535ms 752.469088ms 752.665853ms 752.926916ms 753.004751ms 753.095858ms 753.124155ms 753.221537ms 753.484055ms 753.559833ms 753.680122ms 753.726778ms 753.746784ms 753.866459ms 753.918504ms 754.241941ms 754.438009ms 754.97317ms 755.73473ms 756.084634ms 
757.487163ms 758.896997ms 763.329747ms] Apr 16 13:38:54.471: INFO: 50 %ile: 748.707948ms Apr 16 13:38:54.471: INFO: 90 %ile: 752.926916ms Apr 16 13:38:54.471: INFO: 99 %ile: 758.896997ms Apr 16 13:38:54.471: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:38:54.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2665" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":15,"skipped":282,"failed":0} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:38:54.584: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
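The summary lines above report the 50th, 90th, and 99th percentile over 200 latency samples. A minimal nearest-rank percentile sketch of that summarization (an assumption about the method; this is not the e2e framework's actual code, and the sample values below are illustrative):

```python
def percentile(sorted_samples, p):
    # Nearest-rank percentile over an ascending list (hypothetical helper,
    # not the k8s e2e framework's implementation).
    if not sorted_samples:
        raise ValueError("no samples")
    # ceil(p * n / 100) - 1, clamped to a valid index
    idx = max(0, -(-p * len(sorted_samples) // 100) - 1)
    return sorted_samples[min(idx, len(sorted_samples) - 1)]

# illustrative latencies, in seconds
samples = sorted([0.7633, 0.7489, 0.7529, 0.7411])
p50, p99 = percentile(samples, 50), percentile(samples, 99)
```

The nearest-rank method always returns an observed sample, which matches the log: each reported percentile value also appears verbatim in the latency list.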
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-4701714d-5fff-450d-bb4b-4faf017bb7c1 STEP: Creating a pod to test consume configMaps Apr 16 13:38:54.624: INFO: Waiting up to 5m0s for pod "pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c" in namespace "configmap-5549" to be "Succeeded or Failed" Apr 16 13:38:54.626: INFO: Pod "pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.757329ms Apr 16 13:38:56.630: INFO: Pod "pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006324586s STEP: Saw pod success Apr 16 13:38:56.630: INFO: Pod "pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c" satisfied condition "Succeeded or Failed" Apr 16 13:38:56.633: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c container agnhost-container: <nil> STEP: delete the pod Apr 16 13:38:56.645: INFO: Waiting for pod pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c to disappear Apr 16 13:38:56.647: INFO: Pod pod-configmaps-5fa0fbd5-0e24-43a8-8734-5566a21ecc6c no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:38:56.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5549" for this suite.
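The repeated 'Waiting up to 5m0s for pod … to be "Succeeded or Failed"' lines, with their growing Elapsed values, follow a generic poll-with-deadline pattern. A sketch of that loop (names, intervals, and the simulated phases are illustrative; this is not the e2e framework's API):

```python
import time

def wait_for(condition, timeout_s=300.0, interval_s=2.0):
    # Poll `condition` until it returns truthy or the deadline passes.
    # Generic sketch of a wait loop, not the k8s e2e framework's code.
    deadline = time.monotonic() + timeout_s
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)

# usage: a pod phase that flips to "Succeeded" on the third poll
phases = iter(["Pending", "Pending", "Succeeded"])
done = wait_for(lambda: next(phases) == "Succeeded",
                timeout_s=1.0, interval_s=0.01)
```

Checking the clock only after a failed condition check is why a pod that succeeds just before the deadline still counts as a pass.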
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":353,"failed":0} ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:38:56.669: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 13:38:57.198: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 13:39:00.219: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:00.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1916" for this suite. STEP: Destroying namespace "webhook-1916-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":17,"skipped":362,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:00.368: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 16 13:39:00.457: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002" in namespace "downward-api-9080"
to be "Succeeded or Failed" Apr 16 13:39:00.462: INFO: Pod "downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002": Phase="Pending", Reason="", readiness=false. Elapsed: 5.386938ms Apr 16 13:39:02.466: INFO: Pod "downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009071764s STEP: Saw pod success Apr 16 13:39:02.466: INFO: Pod "downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002" satisfied condition "Succeeded or Failed" Apr 16 13:39:02.471: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002 container client-container: <nil> STEP: delete the pod Apr 16 13:39:02.488: INFO: Waiting for pod downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002 to disappear Apr 16 13:39:02.493: INFO: Pod downwardapi-volume-df4c3ea6-39ec-4fbf-b846-ab3b63bab002 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:02.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9080" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":380,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:02.586: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-map-5ab4f699-726f-49c0-b0c1-46827b69ce3f STEP: Creating a pod to test consume secrets Apr 16 13:39:02.649: INFO: Waiting up to 5m0s for pod "pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf" in namespace "secrets-2399" to be "Succeeded or Failed" Apr 16 13:39:02.655: INFO: Pod "pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.154041ms Apr 16 13:39:04.660: INFO: Pod "pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.010476835s STEP: Saw pod success Apr 16 13:39:04.660: INFO: Pod "pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf" satisfied condition "Succeeded or Failed" Apr 16 13:39:04.663: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf container secret-volume-test: <nil> STEP: delete the pod Apr 16 13:39:04.678: INFO: Waiting for pod pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf to disappear Apr 16 13:39:04.681: INFO: Pod pod-secrets-3cc19646-24a0-4042-b77c-030a49e0bdbf no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:04.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2399" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":419,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:04.703: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-ea2cb440-cfd9-4515-a7f8-06184e09d212 STEP: Creating a pod to test consume secrets Apr 16 13:39:04.777: INFO: Waiting up
to 5m0s for pod "pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361" in namespace "secrets-6754" to be "Succeeded or Failed" Apr 16 13:39:04.781: INFO: Pod "pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361": Phase="Pending", Reason="", readiness=false. Elapsed: 4.617578ms Apr 16 13:39:06.786: INFO: Pod "pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009114225s STEP: Saw pod success Apr 16 13:39:06.786: INFO: Pod "pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361" satisfied condition "Succeeded or Failed" Apr 16 13:39:06.789: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361 container secret-volume-test: <nil> STEP: delete the pod Apr 16 13:39:06.810: INFO: Waiting for pod pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361 to disappear Apr 16 13:39:06.814: INFO: Pod pod-secrets-4776a1a2-90c5-466e-b6f9-a3d6e930e361 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:06.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6754" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":425,"failed":0} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:06.850: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create services for rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating Agnhost RC Apr 16 13:39:06.889: INFO: namespace kubectl-8507 Apr 16 13:39:06.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8507 create -f -' Apr 16 13:39:07.788: INFO: stderr: "" Apr 16 13:39:07.788: INFO: stdout: "replicationcontroller/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 16 13:39:08.792: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 13:39:08.792: INFO: Found 1 / 1 Apr 16 13:39:08.792: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 16 13:39:08.795: INFO: Selector matched 1 pods for map[app:agnhost] Apr 16 13:39:08.795: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 16 13:39:08.795: INFO: wait on agnhost-primary startup in kubectl-8507 Apr 16 13:39:08.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8507 logs agnhost-primary-nr9js agnhost-primary' Apr 16 13:39:08.918: INFO: stderr: "" Apr 16 13:39:08.918: INFO: stdout: "Paused\n" STEP: exposing RC Apr 16 13:39:08.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8507 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379' Apr 16 13:39:09.034: INFO: stderr: "" Apr 16 13:39:09.034: INFO: stdout: "service/rm2 exposed\n" Apr 16 13:39:09.038: INFO: Service rm2 in namespace kubectl-8507 found. Apr 16 13:39:11.042: INFO: Get endpoints failed (interval 2s): endpoints "rm2" not found STEP: exposing service Apr 16 13:39:13.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8507 expose service rm2 --name=rm3 --port=2345 --target-port=6379' Apr 16 13:39:13.142: INFO: stderr: "" Apr 16 13:39:13.142: INFO: stdout: "service/rm3 exposed\n" Apr 16 13:39:13.147: INFO: Service rm3 in namespace kubectl-8507 found. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:15.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8507" for this suite.
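The `kubectl expose` calls above create service rm2 from the RC (port 1234 forwarding to targetPort 6379) and then rm3 from rm2 (port 2345, same targetPort). A minimal data-model sketch of what those commands produce (an illustrative model, not kubectl's implementation):

```python
def expose(selector, port, target_port, name):
    # Model of the Service `kubectl expose` produces: `port` is what clients
    # dial, `targetPort` is the container port behind it. Illustrative only.
    return {"name": name, "selector": dict(selector),
            "ports": [{"port": port, "targetPort": target_port}]}

rm2 = expose({"app": "agnhost"}, port=1234, target_port=6379, name="rm2")
# exposing an existing service re-uses its selector and targetPort;
# only the client-facing port and the name change
rm3 = expose(rm2["selector"], port=2345,
             target_port=rm2["ports"][0]["targetPort"], name="rm3")
```

Because both services end up with the same selector and targetPort, they route to the same agnhost pod on container port 6379, which is exactly what the test verifies.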
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":21,"skipped":433,"failed":0} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:15.189: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating configMap with name configmap-test-volume-map-0266ccda-3d16-4b79-b81f-08c81130c2b0 STEP: Creating a pod to test consume configMaps Apr 16 13:39:15.231: INFO: Waiting up to 5m0s for pod "pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf" in namespace "configmap-8724" to be "Succeeded or Failed" Apr 16 13:39:15.235: INFO: Pod "pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.739384ms Apr 16 13:39:17.240: INFO: Pod "pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 2.008486725s �[1mSTEP�[0m: Saw pod success Apr 16 13:39:17.240: INFO: Pod "pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf" satisfied condition "Succeeded or Failed" Apr 16 13:39:17.244: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 16 13:39:17.257: INFO: Waiting for pod pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf to disappear Apr 16 13:39:17.260: INFO: Pod pod-configmaps-7eb89823-c3f6-43e3-b9ab-d93d5959dcdf no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:17.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-8724" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":443,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:39:17.294: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support proportional scaling [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 13:39:17.321: INFO: Creating deployment "webserver-deployment" Apr 16 13:39:17.326: INFO: Waiting for observed generation 1 Apr 16 13:39:19.334: INFO: Waiting for all required pods to come up Apr 16 13:39:19.340: INFO: Pod name httpd: Found 10 pods out of 10 �[1mSTEP�[0m: ensuring each pod is running Apr 16 13:39:23.350: INFO: Waiting for deployment "webserver-deployment" to complete Apr 16 13:39:23.355: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 16 13:39:23.363: INFO: Updating deployment webserver-deployment Apr 16 13:39:23.363: INFO: Waiting for observed generation 2 Apr 16 13:39:25.378: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 16 13:39:25.380: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 16 13:39:25.384: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 16 13:39:25.392: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 16 13:39:25.392: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 16 13:39:25.394: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 16 13:39:25.399: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 16 13:39:25.399: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 16 13:39:25.406: INFO: Updating deployment webserver-deployment Apr 16 13:39:25.406: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 16 13:39:25.421: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 16 13:39:25.425: INFO: Verifying that second rollout's 
replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 16 13:39:25.441: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-369 7b235bf1-2c14-4de0-8685-8d2335f52bbe 7014 3 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> 
map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f74c58 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2022-04-16 13:39:23 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-16 13:39:25 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 16 13:39:25.448: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-369 9a27a221-5f83-4258-b950-c9806964ff87 7005 3 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7b235bf1-2c14-4de0-8685-8d2335f52bbe 0xc004c7ffe7 0xc004c7ffe8}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b235bf1-2c14-4de0-8685-8d2335f52bbe\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005084088 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:39:25.448: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 16 13:39:25.448: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-369 084b9a5d-4bf4-4a07-8977-7278ef485b46 7002 3 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7b235bf1-2c14-4de0-8685-8d2335f52bbe 0xc0050840e7 0xc0050840e8}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7b235bf1-2c14-4de0-8685-8d2335f52bbe\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:39:18 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005084178 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:39:25.473: INFO: Pod "webserver-deployment-566f96c878-46jz8" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-46jz8 webserver-deployment-566f96c878- deployment-369 351c9278-67f0-4373-8278-219122cf66d5 7023 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f75050 0xc004f75051}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-j42gj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j42gj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Con
ditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.473: INFO: Pod "webserver-deployment-566f96c878-7hxnh" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7hxnh webserver-deployment-566f96c878- deployment-369 57bd8645-8ef1-4ffa-85df-ad52fe53145f 6987 0 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f75197 0xc004f75198}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ptctj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ptctj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-e11j1x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.10,StartTime:2022-04-16 13:39:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.473: INFO: Pod "webserver-deployment-566f96c878-7nl9f" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7nl9f webserver-deployment-566f96c878- deployment-369 c6da08d8-aa1f-4cf3-8687-5f6d9aca5ede 6989 0 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f753a0 0xc004f753a1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nxv85,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxv85,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.35,StartTime:2022-04-16 13:39:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.474: INFO: Pod "webserver-deployment-566f96c878-8jjns" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8jjns webserver-deployment-566f96c878- deployment-369 d53ca613-6d80-4612-801b-140e7c69e545 6993 0 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f755b0 0xc004f755b1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.34\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mvs89,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvs89,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.34,StartTime:2022-04-16 13:39:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.34,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.474: INFO: Pod "webserver-deployment-566f96c878-ghlvm" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ghlvm webserver-deployment-566f96c878- deployment-369 ca448519-ab60-4d4a-b956-dc12c2af95e3 6996 0 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f757c0 0xc004f757c1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8j5jg,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8j5jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.13,StartTime:2022-04-16 13:39:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.474: INFO: Pod "webserver-deployment-566f96c878-rf8lx" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-rf8lx webserver-deployment-566f96c878- deployment-369 4e59ab9e-d9c4-48e5-8260-35bd664a8f01 7030 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f759d0 0xc004f759d1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5jmzb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:Re
sourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5jmzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnam
eAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.475: INFO: Pod "webserver-deployment-566f96c878-skb6x" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-skb6x webserver-deployment-566f96c878- deployment-369 bc5cc3e1-3ecd-4188-ac1c-a0f3ec7ef592 7025 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f75b30 0xc004f75b31}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bck29,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bck29,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:n
il,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.475: INFO: Pod 
"webserver-deployment-566f96c878-wmr6f" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-wmr6f webserver-deployment-566f96c878- deployment-369 539103f6-4be4-44dc-b13b-07641a486440 6981 0 2022-04-16 13:39:23 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 9a27a221-5f83-4258-b950-c9806964ff87 0xc004f75c90 0xc004f75c91}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:23 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9a27a221-5f83-4258-b950-c9806964ff87\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2gf2h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gf2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.28,StartTime:2022-04-16 13:39:23 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.475: INFO: Pod "webserver-deployment-5d9fdcc779-2zcvz" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-2zcvz webserver-deployment-5d9fdcc779- deployment-369 641575bc-e315-4f3e-8b86-4583a554a986 7015 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc004f75ea0 0xc004f75ea1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zk4zq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zk4zq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.476: INFO: Pod "webserver-deployment-5d9fdcc779-4sjjc" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4sjjc webserver-deployment-5d9fdcc779- deployment-369 d99427b4-7b8a-4ac2-ae08-b6230b990da7 6887 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc004f75ff0 0xc004f75ff1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qqzg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqzg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-e11j1x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.9,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://3660ba2b4e972a807f877df41b428a02ef5a1096b7dab4f7ceb267d8406bdd2a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.476: INFO: Pod "webserver-deployment-5d9fdcc779-4sxdd" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-4sxdd webserver-deployment-5d9fdcc779- deployment-369 a457c05a-cf38-48e0-a3f3-d7406ace840e 7026 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc0050461d0 0xc0050461d1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-g8lk7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g8lk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-e11j1x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.476: INFO: Pod "webserver-deployment-5d9fdcc779-brnhr" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-brnhr webserver-deployment-5d9fdcc779- deployment-369 0b6eb6e7-a3d5-47ba-960d-e004802e1b6d 6837 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046320 0xc005046321}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.32\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mcmpb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcmpb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.32,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://3b763552ee999193d5d74c063eb847b9ed83615639612d449c9604378fb6ca4c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.32,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.477: INFO: Pod "webserver-deployment-5d9fdcc779-g549r" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-g549r webserver-deployment-5d9fdcc779- deployment-369 822dee4b-bf4b-45b8-9d68-894ba6c94ce9 6842 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046500 0xc005046501}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.25\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h2lcb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h2lcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.25,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4e8b31e19dd8d6b18232760ce88a962ca184fa6c9fff2c59eeeaaac76aef1444,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.25,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.477: INFO: Pod "webserver-deployment-5d9fdcc779-h55jb" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-h55jb webserver-deployment-5d9fdcc779- deployment-369 dbced38b-7f6f-460a-b81e-ebbbfcd4d8b1 7020 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc0050466e0 0xc0050466e1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4f46r,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4f46r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:39:25.477: INFO: Pod "webserver-deployment-5d9fdcc779-j6wqq" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-j6wqq webserver-deployment-5d9fdcc779- deployment-369 0485f5e6-577b-4c1f-83bd-f7df46fbb8e2 7028 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046830 0xc005046831}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tfppl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tfppl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.477: INFO: Pod 
"webserver-deployment-5d9fdcc779-ljfw7" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-ljfw7 webserver-deployment-5d9fdcc779- deployment-369 664dd88a-d9e3-42a0-8c6b-90e2b0c65435 7029 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046980 0xc005046981}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hrvwm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hrvwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.478: INFO: Pod 
"webserver-deployment-5d9fdcc779-mfz5b" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-mfz5b webserver-deployment-5d9fdcc779- deployment-369 24c6d303-e666-453c-8677-5bfb1c837c4d 7018 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046ad0 0xc005046ad1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dpw8z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpw8z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.478: INFO: Pod "webserver-deployment-5d9fdcc779-prw7x" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-prw7x webserver-deployment-5d9fdcc779- deployment-369 120bd240-1611-4649-bc60-10f36747736d 6840 0 2022-04-16 
13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046c07 0xc005046c08}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kqb5q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kqb5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 
13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.27,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://1302b5ceb90f5e2bd43d6750a33bbeddc46e4d043e7e0e1ec52342df726b2a60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.478: INFO: Pod "webserver-deployment-5d9fdcc779-rq7cd" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-rq7cd webserver-deployment-5d9fdcc779- deployment-369 738a4290-54a8-43b2-abee-052e331a99cd 6830 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046de0 0xc005046de1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-28tjr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-28tjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 
13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.11,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://7f01dea52cafa0befbe1feb5df4ad2041d82e065748374e0923f426818786948,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.479: INFO: Pod "webserver-deployment-5d9fdcc779-wdl6n" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wdl6n webserver-deployment-5d9fdcc779- deployment-369 89bc1f90-6e0e-4a52-885e-21133ff68700 6827 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005046fc0 0xc005046fc1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:18 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gbg4q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gbg4q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 
13:39:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.12,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5f71836e3f6cbf13ea2e3627cf0425b7f21c21f9e2896f419dafa4a331d70ab2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.479: INFO: Pod "webserver-deployment-5d9fdcc779-wktrv" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wktrv webserver-deployment-5d9fdcc779- deployment-369 7212955c-ea3a-4cdd-bee8-7bd0775df94f 6884 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc0050471a0 0xc0050471a1}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:39:21 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.8\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xxgth,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xxgth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-e11j1x,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:21 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.8,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://6e57e26e27cd279d85570ecf929a82490047a71a92bcb4d5237b7b582754774c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.479: INFO: Pod "webserver-deployment-5d9fdcc779-xsbgm" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-xsbgm webserver-deployment-5d9fdcc779- deployment-369 c9247828-9412-469b-acc7-a26aa03b5751 7027 0 2022-04-16 13:39:25 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005047380 0xc005047381}] [] [{Go-http-client Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status} {kube-controller-manager Update v1 2022-04-16 13:39:25 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-c6h44,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c6h44,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,Pr
ocMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2022-04-16 13:39:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 16 13:39:25.480: INFO: Pod "webserver-deployment-5d9fdcc779-zc4bn" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-zc4bn webserver-deployment-5d9fdcc779- deployment-369 0c4f7eb4-e098-4d0a-901c-02e46d2e5765 6846 0 2022-04-16 13:39:17 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 084b9a5d-4bf4-4a07-8977-7278ef485b46 0xc005047530 0xc005047531}] [] [{kube-controller-manager Update v1 2022-04-16 13:39:17 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"084b9a5d-4bf4-4a07-8977-7278ef485b46\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 
2022-04-16 13:39:19 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zt2nl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources
:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zt2nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{
},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:39:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.26,StartTime:2022-04-16 13:39:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:39:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f7d6facef8a7b0088ade873e313492fa30cfc1d88cd762025f0687dbdfbd9f75,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:25.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-369" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":23,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:25.586: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] Replicaset should have a working scale subresource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota Apr 16 13:39:25.629: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 16 13:39:30.634: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the replicaset Spec.Replicas was modified STEP: Patch a scale subresource [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:30.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5061" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":24,"skipped":483,"failed":0} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:39:30.697: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name secret-test-02ab7b5f-dd18-4868-acc0-0ce7db904fa2 STEP: Creating a pod to test consume secrets Apr 16 13:39:30.748: INFO: Waiting up to 5m0s for pod "pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4" in namespace "secrets-1283" to be "Succeeded or Failed" Apr 16 13:39:30.751: INFO: Pod "pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.805894ms Apr 16 13:39:32.755: INFO: Pod "pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007534287s STEP: Saw pod success Apr 16 13:39:32.755: INFO: Pod "pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4" satisfied condition "Succeeded or Failed" Apr 16 13:39:32.758: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4 container secret-volume-test: <nil> STEP: delete the pod Apr 16 13:39:32.772: INFO: Waiting for pod pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4 to disappear Apr 16 13:39:32.774: INFO: Pod pod-secrets-c05223d2-8300-4297-890f-41585ff21bb4 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:39:32.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1283" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":493,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:38:29.806: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-2221 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a new StatefulSet Apr 16 13:38:29.848: INFO: Found 0 stateful pods, waiting for 3 Apr 16 13:38:39.854: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 13:38:39.854: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 13:38:39.854: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 16 13:38:39.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2221 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:38:40.030: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:38:40.030: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:38:40.030: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2 Apr 16 13:38:50.077: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 16 13:39:00.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2221 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 13:39:00.326: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 16 13:39:00.327: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 13:39:00.327: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' STEP: Rolling back to a previous revision Apr 16 13:39:20.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2221 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:39:20.726: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:39:20.726: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:39:20.726: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 13:39:30.765: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 16 13:39:40.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2221 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 13:39:40.943: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 16 13:39:40.943: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 13:39:40.943: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 16 13:39:50.972: INFO: Deleting all statefulset in ns statefulset-2221 Apr 16 13:39:50.976: INFO: Scaling statefulset ss2 to 0 Apr 16 13:40:00.995: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 13:40:00.998: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:01.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2221" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":17,"skipped":325,"failed":0}
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:01.032: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186
[It] should support creating IngressClass API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 16 13:40:01.089: INFO: starting watch
STEP: patching
STEP: updating
Apr 16 13:40:01.099: INFO: waiting for watch events with expected annotations
Apr 16 13:40:01.099: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:01.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingressclass-65" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":18,"skipped":328,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:01.151: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 16 13:40:01.188: INFO: Waiting up to 5m0s for pod "pod-ebe27877-c1b9-4dc7-b601-d08447459590" in namespace "emptydir-8624" to be "Succeeded or Failed"
Apr 16 13:40:01.191: INFO: Pod "pod-ebe27877-c1b9-4dc7-b601-d08447459590": Phase="Pending", Reason="", readiness=false. Elapsed: 2.985964ms
Apr 16 13:40:03.196: INFO: Pod "pod-ebe27877-c1b9-4dc7-b601-d08447459590": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007374953s
STEP: Saw pod success
Apr 16 13:40:03.196: INFO: Pod "pod-ebe27877-c1b9-4dc7-b601-d08447459590" satisfied condition "Succeeded or Failed"
Apr 16 13:40:03.198: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-ebe27877-c1b9-4dc7-b601-d08447459590 container test-container: <nil>
STEP: delete the pod
Apr 16 13:40:03.211: INFO: Waiting for pod pod-ebe27877-c1b9-4dc7-b601-d08447459590 to disappear
Apr 16 13:40:03.213: INFO: Pod pod-ebe27877-c1b9-4dc7-b601-d08447459590 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:03.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8624" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":345,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:03.258: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:40:03.294: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-bbf0f583-3450-4571-8f6f-8e2b691c64d2" in namespace "security-context-test-3807" to be "Succeeded or Failed"
Apr 16 13:40:03.300: INFO: Pod "alpine-nnp-false-bbf0f583-3450-4571-8f6f-8e2b691c64d2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.896923ms
Apr 16 13:40:05.305: INFO: Pod "alpine-nnp-false-bbf0f583-3450-4571-8f6f-8e2b691c64d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009929159s
Apr 16 13:40:05.305: INFO: Pod "alpine-nnp-false-bbf0f583-3450-4571-8f6f-8e2b691c64d2" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:05.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3807" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":370,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:39:32.801: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-upd-d0b2c4ed-97ac-474a-9184-6569b0f8b4a6
STEP: Creating the pod
Apr 16 13:39:32.870: INFO: The status of Pod pod-configmaps-75078b21-c043-44e9-be31-8704c66202ed is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:39:34.874: INFO: The status of Pod pod-configmaps-75078b21-c043-44e9-be31-8704c66202ed is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:39:36.875: INFO: The status of Pod pod-configmaps-75078b21-c043-44e9-be31-8704c66202ed is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-d0b2c4ed-97ac-474a-9184-6569b0f8b4a6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:43.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6283" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":505,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:36:50.356: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod test-webserver-6f6bd093-6564-43ba-b172-aec1f8505a27 in namespace container-probe-5739
Apr 16 13:36:52.403: INFO: Started pod test-webserver-6f6bd093-6564-43ba-b172-aec1f8505a27 in namespace container-probe-5739
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 13:36:52.406: INFO: Initial restart count of pod test-webserver-6f6bd093-6564-43ba-b172-aec1f8505a27 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:53.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5739" for this suite.
• [SLOW TEST:242.838 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":91,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:43.179: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
Apr 16 13:40:53.267: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb is Running (Ready = true)
Apr 16 13:40:53.390: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:53.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3305" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":27,"skipped":512,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:53.403: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:40:53.440: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887" in namespace "downward-api-3109" to be "Succeeded or Failed"
Apr 16 13:40:53.443: INFO: Pod "downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887": Phase="Pending", Reason="", readiness=false. Elapsed: 3.348902ms
Apr 16 13:40:55.449: INFO: Pod "downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009122993s
STEP: Saw pod success
Apr 16 13:40:55.449: INFO: Pod "downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887" satisfied condition "Succeeded or Failed"
Apr 16 13:40:55.452: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887 container client-container: <nil>
STEP: delete the pod
Apr 16 13:40:55.464: INFO: Waiting for pod downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887 to disappear
Apr 16 13:40:55.467: INFO: Pod downwardapi-volume-e16f34a2-2f34-4528-83f0-ff3fc1dc0887 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3109" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":512,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:55.485: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:55.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3388" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":29,"skipped":517,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:55.538: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating pod
Apr 16 13:40:55.572: INFO: The status of Pod pod-hostip-d9d6ee54-7425-48df-993b-1f3e21ccb317 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:40:57.576: INFO: The status of Pod pod-hostip-d9d6ee54-7425-48df-993b-1f3e21ccb317 is Running (Ready = true)
Apr 16 13:40:57.582: INFO: Pod pod-hostip-d9d6ee54-7425-48df-993b-1f3e21ccb317 has hostIP: 172.18.0.5
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:57.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-365" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":519,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:53.203: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:40:54.034: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Apr 16 13:40:56.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 40, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 40, 54, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 40, 54, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 40, 54, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:40:59.062: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:40:59.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5231" for this suite.
STEP: Destroying namespace "webhook-5231-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":5,"skipped":96,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:59.293: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:40:59.318: INFO: Creating replica set "test-rolling-update-controller"
(going to be adopted)
Apr 16 13:40:59.325: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 16 13:41:04.330: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 16 13:41:04.330: INFO: Creating deployment "test-rolling-update-deployment"
Apr 16 13:41:04.334: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 16 13:41:04.341: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 16 13:41:06.349: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 16 13:41:06.351: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 16 13:41:06.360: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-376 163c266a-0919-4d5f-a355-1e4ae0cde4d3 8341 1 2022-04-16 13:41:04 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-16 13:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update
apps/v1 2022-04-16 13:41:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002ea6f38 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-16 13:41:04 +0000 UTC,LastTransitionTime:2022-04-16 13:41:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-796dbc4547" has successfully progressed.,LastUpdateTime:2022-04-16 13:41:05 +0000 UTC,LastTransitionTime:2022-04-16 13:41:04 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 16 13:41:06.363: INFO: New ReplicaSet "test-rolling-update-deployment-796dbc4547" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-796dbc4547 deployment-376 894f7b9c-73a0-4853-93ad-56bc20c6693f 8331 1 2022-04-16 13:41:04 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 163c266a-0919-4d5f-a355-1e4ae0cde4d3 0xc00365d547 0xc00365d548}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:41:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"163c266a-0919-4d5f-a355-1e4ae0cde4d3\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:41:05 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 796dbc4547,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00365d5f8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:41:06.363: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 16 13:41:06.363: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-376 ac0519c1-862b-48ae-88e0-e64b1fcb3c36 8340 2 2022-04-16 13:40:59 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 163c266a-0919-4d5f-a355-1e4ae0cde4d3 0xc00365d41f 0xc00365d430}] [] [{e2e.test Update apps/v1 2022-04-16 13:40:59 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:41:05 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"163c266a-0919-4d5f-a355-1e4ae0cde4d3\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:41:05 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00365d4e8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:41:06.367: INFO: Pod "test-rolling-update-deployment-796dbc4547-mlnxp" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-mlnxp test-rolling-update-deployment-796dbc4547- deployment-376 5db6460f-715c-439b-b69c-9a90d5d16128 8330 0 2022-04-16 13:41:04 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 894f7b9c-73a0-4853-93ad-56bc20c6693f 0xc00365da27 0xc00365da28}] [] [{kube-controller-manager Update v1 2022-04-16 13:41:04 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"894f7b9c-73a0-4853-93ad-56bc20c6693f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } 
{Go-http-client Update v1 2022-04-16 13:41:05 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2pvp2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{
},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2pvp2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[
]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:41:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:41:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:41:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:41:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.40,StartTime:2022-04-16 13:41:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:41:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://1b2bac174b3d66e0ff5f608b9d012664e4e7093374db5bbabc8e124ee0d14dc8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:41:06.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-376" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:41:06.487: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-fmpft in namespace proxy-9134 I0416 13:41:06.534323 18 runners.go:193] Created replication controller with name: proxy-service-fmpft, namespace: proxy-9134, replica count: 1 I0416 13:41:07.585188 18 runners.go:193] proxy-service-fmpft Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 
0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 13:41:08.585764 18 runners.go:193] proxy-service-fmpft Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 13:41:08.588: INFO: setup took 2.074769474s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 16 13:41:08.600: INFO: (0) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 11.64094ms) Apr 16 13:41:08.600: INFO: (0) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 11.641813ms) Apr 16 13:41:08.600: INFO: (0) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 11.783939ms) Apr 16 13:41:08.600: INFO: (0) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 11.721102ms) Apr 16 13:41:08.601: INFO: (0) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 12.223845ms) Apr 16 13:41:08.601: INFO: (0) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 11.9885ms) Apr 16 13:41:08.602: INFO: (0) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 13.689646ms) Apr 16 13:41:08.602: INFO: (0) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 13.6224ms) Apr 16 13:41:08.604: INFO: (0) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 15.556566ms) Apr 16 13:41:08.608: INFO: (0) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 19.072409ms) Apr 16 13:41:08.608: INFO: (0) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 19.953173ms) Apr 16 13:41:08.609: INFO: (0) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 19.772427ms) Apr 16 13:41:08.609: INFO: (0) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 19.814196ms) Apr 16 13:41:08.609: INFO: (0) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 20.015311ms) Apr 16 13:41:08.609: INFO: (0) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 20.029399ms) Apr 16 13:41:08.609: INFO: (0) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 19.820254ms) Apr 16 13:41:08.614: INFO: (1) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 5.013789ms) Apr 16 13:41:08.616: INFO: (1) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 6.938597ms) Apr 16 13:41:08.621: INFO: (1) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 12.646052ms) Apr 16 13:41:08.622: INFO: (1) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... 
(200; 12.825545ms) Apr 16 13:41:08.622: INFO: (1) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 12.923632ms) Apr 16 13:41:08.622: INFO: (1) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 12.941391ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 13.880229ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 14.264047ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 14.269073ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 14.242775ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 14.345703ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 14.416664ms) Apr 16 13:41:08.623: INFO: (1) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 14.329465ms) Apr 16 13:41:08.624: INFO: (1) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 14.710818ms) Apr 16 13:41:08.625: INFO: (1) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 15.63834ms) Apr 16 13:41:08.625: INFO: (1) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 15.617644ms) Apr 16 13:41:08.629: INFO: (2) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 4.192629ms) Apr 16 13:41:08.629: INFO: (2) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 4.197676ms) Apr 16 13:41:08.634: 
INFO: (2) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 8.861124ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 8.999323ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 8.899155ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 9.405206ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 9.441577ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.427574ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 9.339089ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.610973ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 9.444557ms) Apr 16 13:41:08.634: INFO: (2) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 9.525769ms) Apr 16 13:41:08.638: INFO: (2) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 12.930996ms) Apr 16 13:41:08.638: INFO: (2) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 13.012149ms) Apr 16 13:41:08.638: INFO: (2) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 13.076076ms) Apr 16 13:41:08.639: INFO: (2) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 14.234045ms) Apr 16 13:41:08.647: INFO: (3) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 7.343151ms) Apr 16 13:41:08.647: INFO: (3) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 7.779949ms) Apr 16 13:41:08.647: INFO: (3) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... 
(200; 7.964861ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 8.426326ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.559629ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 8.397552ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.633094ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 8.59434ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 8.677298ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 8.804856ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 8.818685ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 9.001122ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 8.811918ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 8.817862ms) Apr 16 13:41:08.648: INFO: (3) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 8.982205ms) Apr 16 13:41:08.649: INFO: (3) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.700691ms) Apr 16 13:41:08.655: INFO: (4) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.418391ms) Apr 16 13:41:08.658: INFO: (4) 
/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 8.874031ms) Apr 16 13:41:08.659: INFO: (4) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 9.733266ms) Apr 16 13:41:08.659: INFO: (4) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.868444ms) Apr 16 13:41:08.659: INFO: (4) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 10.049786ms) Apr 16 13:41:08.660: INFO: (4) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 10.553463ms) Apr 16 13:41:08.661: INFO: (4) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 11.653031ms) Apr 16 13:41:08.661: INFO: (4) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 12.068886ms) Apr 16 13:41:08.661: INFO: (4) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 12.076257ms) Apr 16 13:41:08.661: INFO: (4) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 12.053537ms) Apr 16 13:41:08.661: INFO: (4) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 12.315654ms) Apr 16 13:41:08.662: INFO: (4) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 12.46681ms) Apr 16 13:41:08.662: INFO: (4) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 12.464476ms) Apr 16 13:41:08.662: INFO: (4) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 12.793675ms) Apr 16 13:41:08.662: INFO: (4) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 12.687409ms) Apr 16 13:41:08.662: INFO: (4) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 12.960834ms) Apr 16 13:41:08.666: INFO: (5) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 4.099874ms) Apr 16 13:41:08.666: INFO: (5) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 4.302347ms) Apr 16 13:41:08.669: INFO: (5) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.767417ms) Apr 16 13:41:08.669: INFO: (5) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.81956ms) Apr 16 13:41:08.669: INFO: (5) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.921517ms) Apr 16 13:41:08.669: INFO: (5) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 6.821921ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.190318ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 7.262404ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 7.281155ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 7.705051ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 7.957862ms) Apr 16 13:41:08.670: INFO: (5) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 8.167329ms) Apr 16 13:41:08.671: INFO: (5) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 8.302439ms) Apr 16 13:41:08.671: INFO: (5) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.81985ms) Apr 16 13:41:08.671: INFO: (5) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 8.938213ms) Apr 16 13:41:08.671: INFO: (5) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 8.832456ms) Apr 16 13:41:08.677: INFO: (6) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... 
(200; 5.899543ms) Apr 16 13:41:08.677: INFO: (6) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 5.852438ms) Apr 16 13:41:08.677: INFO: (6) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 5.891297ms) Apr 16 13:41:08.677: INFO: (6) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 5.870803ms) Apr 16 13:41:08.677: INFO: (6) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 5.870782ms) Apr 16 13:41:08.678: INFO: (6) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 6.363455ms) Apr 16 13:41:08.678: INFO: (6) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.180837ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.664242ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 7.631371ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 7.733904ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 7.898578ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 7.841593ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 7.816308ms) Apr 16 13:41:08.679: INFO: (6) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 7.852571ms) Apr 16 13:41:08.680: INFO: (6) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 8.23566ms) Apr 16 13:41:08.680: INFO: (6) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 8.397596ms) Apr 16 13:41:08.684: INFO: (7) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 4.337268ms) Apr 16 13:41:08.687: INFO: (7) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 6.907286ms) Apr 16 13:41:08.687: INFO: (7) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.997706ms) Apr 16 13:41:08.687: INFO: (7) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.928353ms) Apr 16 13:41:08.687: INFO: (7) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... 
(200; 6.959088ms) Apr 16 13:41:08.687: INFO: (7) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 6.917229ms) Apr 16 13:41:08.688: INFO: (7) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.70705ms) Apr 16 13:41:08.688: INFO: (7) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.50663ms) Apr 16 13:41:08.688: INFO: (7) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.705915ms) Apr 16 13:41:08.689: INFO: (7) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 8.328203ms) Apr 16 13:41:08.689: INFO: (7) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.101291ms) Apr 16 13:41:08.689: INFO: (7) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 8.953574ms) Apr 16 13:41:08.690: INFO: (7) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 9.617363ms) Apr 16 13:41:08.690: INFO: (7) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.787894ms) Apr 16 13:41:08.690: INFO: (7) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.796152ms) Apr 16 13:41:08.691: INFO: (7) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 10.34995ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 5.959743ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 6.775586ms) Apr 16 13:41:08.698: INFO: (8) 
/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.537697ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 5.684022ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.179567ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.90973ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 7.236339ms) Apr 16 13:41:08.698: INFO: (8) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.389964ms) Apr 16 13:41:08.699: INFO: (8) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 6.234535ms) Apr 16 13:41:08.699: INFO: (8) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... 
(200; 6.856155ms) Apr 16 13:41:08.701: INFO: (8) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.830669ms) Apr 16 13:41:08.701: INFO: (8) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 9.341138ms) Apr 16 13:41:08.701: INFO: (8) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 9.896906ms) Apr 16 13:41:08.701: INFO: (8) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.525038ms) Apr 16 13:41:08.701: INFO: (8) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.106613ms) Apr 16 13:41:08.702: INFO: (8) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 9.464552ms) Apr 16 13:41:08.708: INFO: (9) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 6.50461ms) Apr 16 13:41:08.708: INFO: (9) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.539983ms) Apr 16 13:41:08.708: INFO: (9) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.66355ms) Apr 16 13:41:08.708: INFO: (9) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.832074ms) Apr 16 13:41:08.708: INFO: (9) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.693552ms) Apr 16 13:41:08.709: INFO: (9) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 6.826958ms) Apr 16 13:41:08.709: INFO: (9) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 6.936897ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 9.079385ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 9.018494ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 9.091018ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 8.949848ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.037798ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.182889ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.156205ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.027477ms) Apr 16 13:41:08.711: INFO: (9) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 9.094012ms) Apr 16 13:41:08.715: INFO: (10) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 3.958727ms) Apr 16 13:41:08.716: INFO: (10) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 4.803516ms) Apr 16 13:41:08.716: INFO: (10) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a 
href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 5.292886ms) Apr 16 13:41:08.718: INFO: (10) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 7.13454ms) Apr 16 13:41:08.718: INFO: (10) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 7.276498ms) Apr 16 13:41:08.719: INFO: (10) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 8.418624ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 8.454393ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 8.564219ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 8.422911ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.850121ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.045536ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 8.952721ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 9.014482ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.158587ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... 
(200; 9.116109ms) Apr 16 13:41:08.720: INFO: (10) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.016522ms) Apr 16 13:41:08.726: INFO: (11) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 5.966767ms) Apr 16 13:41:08.726: INFO: (11) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 5.966613ms) Apr 16 13:41:08.726: INFO: (11) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 5.779128ms) Apr 16 13:41:08.727: INFO: (11) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 6.230422ms) Apr 16 13:41:08.727: INFO: (11) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.0499ms) Apr 16 13:41:08.727: INFO: (11) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.756161ms) Apr 16 13:41:08.728: INFO: (11) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.303451ms) Apr 16 13:41:08.728: INFO: (11) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 6.982035ms) Apr 16 13:41:08.728: INFO: (11) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... 
(200; 6.999176ms) Apr 16 13:41:08.728: INFO: (11) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 7.224466ms) Apr 16 13:41:08.728: INFO: (11) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 7.153597ms) Apr 16 13:41:08.729: INFO: (11) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 8.371785ms) Apr 16 13:41:08.730: INFO: (11) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 8.982132ms) Apr 16 13:41:08.730: INFO: (11) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 9.070686ms) Apr 16 13:41:08.730: INFO: (11) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.42203ms) Apr 16 13:41:08.730: INFO: (11) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 9.34344ms) Apr 16 13:41:08.735: INFO: (12) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 5.135347ms) Apr 16 13:41:08.736: INFO: (12) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 6.349934ms) Apr 16 13:41:08.736: INFO: (12) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 6.24774ms) Apr 16 13:41:08.736: INFO: (12) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 6.385287ms) Apr 16 13:41:08.736: INFO: (12) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 6.447569ms) Apr 16 13:41:08.736: INFO: (12) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.368622ms) Apr 16 13:41:08.737: INFO: (12) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 6.430395ms) Apr 16 13:41:08.738: INFO: (12) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.501102ms) Apr 16 13:41:08.738: INFO: (12) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 7.756494ms) Apr 16 13:41:08.738: INFO: (12) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 7.802898ms) Apr 16 13:41:08.738: INFO: (12) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 7.725763ms) Apr 16 13:41:08.738: INFO: (12) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... 
(200; 7.760792ms) Apr 16 13:41:08.740: INFO: (12) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.744016ms) Apr 16 13:41:08.740: INFO: (12) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.876544ms) Apr 16 13:41:08.740: INFO: (12) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 10.192261ms) Apr 16 13:41:08.740: INFO: (12) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 10.205652ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.42022ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.213802ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 8.814541ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 8.446873ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 8.35293ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 8.070882ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 8.167052ms) Apr 16 13:41:08.749: INFO: (13) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 8.038073ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 9.975207ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 10.096968ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 9.876081ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 10.800385ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 10.332628ms) Apr 16 13:41:08.751: INFO: (13) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 10.546331ms) Apr 16 13:41:08.752: INFO: (13) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 10.197277ms) Apr 16 13:41:08.753: INFO: (13) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 12.157916ms) Apr 16 13:41:08.760: INFO: (14) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 6.889701ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.273508ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a 
href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 7.459263ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 7.710165ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.820625ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.705132ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 7.858704ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 7.806311ms) Apr 16 13:41:08.761: INFO: (14) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 7.8719ms) Apr 16 13:41:08.762: INFO: (14) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.54006ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 9.25865ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 9.41441ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 9.256028ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 9.452504ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 9.303104ms) Apr 16 13:41:08.763: INFO: (14) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 9.269804ms) Apr 16 13:41:08.766: INFO: (15) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 3.400344ms) Apr 16 13:41:08.769: INFO: (15) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... 
(200; 5.654631ms) Apr 16 13:41:08.770: INFO: (15) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 6.972378ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 11.713773ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 11.758667ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 12.02177ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 11.489168ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 11.947366ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 11.47169ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 11.682921ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 11.564573ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 11.751111ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 11.830978ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 12.042737ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 12.018342ms) Apr 16 13:41:08.775: INFO: (15) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 11.907665ms) Apr 16 13:41:08.780: INFO: (16) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 4.867497ms) Apr 16 13:41:08.780: INFO: (16) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 4.751822ms) Apr 16 13:41:08.783: INFO: (16) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 7.588073ms) Apr 16 13:41:08.783: INFO: (16) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 7.926701ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.864727ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 8.024842ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 7.863969ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 7.925383ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 8.031388ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 7.894243ms) Apr 16 13:41:08.784: INFO: (16) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.525885ms) Apr 16 13:41:08.787: INFO: (16) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 11.096022ms) Apr 16 13:41:08.787: INFO: (16) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 11.27442ms) Apr 16 13:41:08.787: INFO: (16) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 11.148644ms) Apr 16 13:41:08.787: INFO: (16) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 11.229373ms) Apr 16 13:41:08.787: INFO: (16) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 11.636063ms) Apr 16 13:41:08.796: INFO: (17) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 8.168499ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 9.243705ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 9.171857ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a 
href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 9.375051ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 9.076895ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.338419ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 9.256307ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 9.229547ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 9.22574ms) Apr 16 13:41:08.797: INFO: (17) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.140302ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 17.37254ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 16.692589ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 16.908168ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 17.160776ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 16.795927ms) Apr 16 13:41:08.805: INFO: (17) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 17.657186ms) Apr 16 13:41:08.813: INFO: (18) 
/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 7.234834ms) Apr 16 13:41:08.813: INFO: (18) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 7.475145ms) Apr 16 13:41:08.813: INFO: (18) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 7.427089ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 17.395034ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 17.46163ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 17.652343ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 17.508856ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 17.509926ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 17.620255ms) Apr 16 13:41:08.823: INFO: (18) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... 
(200; 18.000036ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 18.40381ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 18.408421ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 18.341459ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 18.4895ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 18.852325ms) Apr 16 13:41:08.824: INFO: (18) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... (200; 18.848445ms) Apr 16 13:41:08.831: INFO: (19) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:462/proxy/: tls qux (200; 7.146642ms) Apr 16 13:41:08.833: INFO: (19) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 8.391047ms) Apr 16 13:41:08.833: INFO: (19) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:460/proxy/: tls baz (200; 8.804321ms) Apr 16 13:41:08.834: INFO: (19) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 9.029933ms) Apr 16 13:41:08.834: INFO: (19) /api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:162/proxy/: bar (200; 9.425462ms) Apr 16 13:41:08.834: INFO: (19) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:160/proxy/: foo (200; 9.547793ms) Apr 16 13:41:08.834: INFO: (19) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g/proxy/rewriteme">test</a> (200; 9.927641ms) Apr 16 13:41:08.835: INFO: (19) 
/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/http:proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">... (200; 10.69114ms) Apr 16 13:41:08.835: INFO: (19) /api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/https:proxy-service-fmpft-ckz5g:443/proxy/tlsrewritem... (200; 10.970024ms) Apr 16 13:41:08.835: INFO: (19) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname1/proxy/: tls baz (200; 11.02091ms) Apr 16 13:41:08.835: INFO: (19) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname2/proxy/: bar (200; 10.956181ms) Apr 16 13:41:08.836: INFO: (19) /api/v1/namespaces/proxy-9134/services/https:proxy-service-fmpft:tlsportname2/proxy/: tls qux (200; 11.320407ms) Apr 16 13:41:08.836: INFO: (19) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname2/proxy/: bar (200; 11.718319ms) Apr 16 13:41:08.837: INFO: (19) /api/v1/namespaces/proxy-9134/services/proxy-service-fmpft:portname1/proxy/: foo (200; 12.005725ms) Apr 16 13:41:08.837: INFO: (19) /api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/: <a href="/api/v1/namespaces/proxy-9134/pods/proxy-service-fmpft-ckz5g:1080/proxy/rewriteme">test<... 
(200; 11.959784ms) Apr 16 13:41:08.837: INFO: (19) /api/v1/namespaces/proxy-9134/services/http:proxy-service-fmpft:portname1/proxy/: foo (200; 12.210835ms) STEP: deleting ReplicationController proxy-service-fmpft in namespace proxy-9134, will wait for the garbage collector to delete the pods Apr 16 13:41:08.896: INFO: Deleting ReplicationController proxy-service-fmpft took: 5.371082ms Apr 16 13:41:08.997: INFO: Terminating ReplicationController proxy-service-fmpft pods took: 101.255379ms [AfterEach] version v1 /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:41:11.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9134" for this suite. • ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":-1,"completed":7,"skipped":243,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:41:11.379: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 16 13:41:11.419: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410" in namespace "projected-6996" to be "Succeeded or Failed" Apr 16 13:41:11.424: INFO: Pod "downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410": Phase="Pending", Reason="", readiness=false. Elapsed: 4.658951ms Apr 16 13:41:13.428: INFO: Pod "downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008092978s STEP: Saw pod success Apr 16 13:41:13.428: INFO: Pod "downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410" satisfied condition "Succeeded or Failed" Apr 16 13:41:13.431: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410 container client-container: <nil> STEP: delete the pod Apr 16 13:41:13.451: INFO: Waiting for pod downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410 to disappear Apr 16 13:41:13.454: INFO: Pod downwardapi-volume-7f93c3c7-a748-47d8-9a1e-4bab88230410 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:41:13.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6996" for this suite. 
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":290,"failed":0}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:13.468: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test substitution in container's args
Apr 16 13:41:13.502: INFO: Waiting up to 5m0s for pod "var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f" in namespace "var-expansion-4724" to be "Succeeded or Failed"
Apr 16 13:41:13.505: INFO: Pod "var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.834741ms
Apr 16 13:41:15.511: INFO: Pod "var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009245403s
STEP: Saw pod success
Apr 16 13:41:15.511: INFO: Pod "var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f" satisfied condition "Succeeded or Failed"
Apr 16 13:41:15.514: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f container dapi-container: <nil>
STEP: delete the pod
Apr 16 13:41:15.531: INFO: Waiting for pod var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f to disappear
Apr 16 13:41:15.533: INFO: Pod var-expansion-3e4cfb87-592e-4520-8db2-0d52fcf6aa0f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:15.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4724" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":292,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:05.378: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-8196
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a new StatefulSet
Apr 16 13:40:05.425: INFO: Found 0 stateful pods, waiting for 3
Apr 16 13:40:15.431: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 13:40:15.431: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 13:40:15.431: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Apr 16 13:40:15.459: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 16 13:40:25.493: INFO: Updating stateful set ss2
Apr 16 13:40:25.498: INFO: Waiting for Pod statefulset-8196/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
STEP: Restoring Pods to the correct revision when they are deleted
Apr 16 13:40:35.533: INFO: Found 1 stateful pods, waiting for 3
Apr 16 13:40:45.538: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 13:40:45.538: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 13:40:45.538: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 16 13:40:45.563: INFO: Updating stateful set ss2
Apr 16 13:40:45.574: INFO: Waiting for Pod statefulset-8196/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Apr 16 13:40:55.602: INFO: Updating stateful set ss2
Apr 16 13:40:55.611: INFO: Waiting for StatefulSet statefulset-8196/ss2 to complete update
Apr 16 13:40:55.611: INFO: Waiting for Pod statefulset-8196/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 16 13:41:05.619: INFO: Deleting all statefulset in ns statefulset-8196
Apr 16 13:41:05.621: INFO: Scaling statefulset ss2 to 0
Apr 16 13:41:15.638: INFO: Waiting for statefulset status.replicas updated to 0
Apr 16 13:41:15.641: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:15.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8196" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":21,"skipped":412,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:15.610: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:41:15.649: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e341356d-475f-4d27-b59a-1b4940b4300b" in namespace "security-context-test-2052" to be "Succeeded or Failed"
Apr 16 13:41:15.656: INFO: Pod "busybox-user-65534-e341356d-475f-4d27-b59a-1b4940b4300b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.45965ms
Apr 16 13:41:17.661: INFO: Pod "busybox-user-65534-e341356d-475f-4d27-b59a-1b4940b4300b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011553303s
Apr 16 13:41:17.661: INFO: Pod "busybox-user-65534-e341356d-475f-4d27-b59a-1b4940b4300b" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2052" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":344,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:40:57.596: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-9602
[It] should list, patch and delete a collection of StatefulSets [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:40:57.644: INFO: Found 0 stateful pods, waiting for 1
Apr 16 13:41:07.650: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: patching the StatefulSet
Apr 16 13:41:07.669: INFO: Found 1 stateful pods, waiting for 2
Apr 16 13:41:17.680: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 16 13:41:17.680: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Listing all StatefulSets
STEP: Delete all of the StatefulSets
STEP: Verify that StatefulSets have been deleted
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 16 13:41:17.705: INFO: Deleting all statefulset in ns statefulset-9602
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:17.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9602" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":31,"skipped":520,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:15.705: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1537
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 16 13:41:15.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3849 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2'
Apr 16 13:41:15.813: INFO: stderr: ""
Apr 16 13:41:15.813: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
Apr 16 13:41:15.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3849 delete pods e2e-test-httpd-pod'
Apr 16 13:41:18.142: INFO: stderr: ""
Apr 16 13:41:18.142: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:18.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3849" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":22,"skipped":433,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:17.865: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-3fc25287-fd5f-4ed2-bf3d-b160b69e7752
STEP: Creating a pod to test consume secrets
Apr 16 13:41:17.919: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca" in namespace "projected-7096" to be "Succeeded or Failed"
Apr 16 13:41:17.924: INFO: Pod "pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.924004ms
Apr 16 13:41:19.928: INFO: Pod "pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009348164s
STEP: Saw pod success
Apr 16 13:41:19.928: INFO: Pod "pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca" satisfied condition "Succeeded or Failed"
Apr 16 13:41:19.931: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 16 13:41:19.946: INFO: Waiting for pod pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca to disappear
Apr 16 13:41:19.949: INFO: Pod pod-projected-secrets-6f6f6b47-fbba-464a-91b7-d8b3ff09f8ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:19.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7096" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":565,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:19.970: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-3ded46e7-9a8a-4c67-9659-dd60b13164c1
STEP: Creating a pod to test consume configMaps
Apr 16 13:41:20.024: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373" in namespace "projected-2669" to be "Succeeded or Failed"
Apr 16 13:41:20.028: INFO: Pod "pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392937ms
Apr 16 13:41:22.032: INFO: Pod "pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007831229s
STEP: Saw pod success
Apr 16 13:41:22.032: INFO: Pod "pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373" satisfied condition "Succeeded or Failed"
Apr 16 13:41:22.035: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Apr 16 13:41:22.049: INFO: Waiting for pod pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373 to disappear
Apr 16 13:41:22.052: INFO: Pod pod-projected-configmaps-7d505af3-d7b5-4fb5-a7e1-3260c6181373 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:22.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2669" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":570,"failed":0}
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:22.126: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating secret secrets-4838/secret-test-e46c4fe0-0066-44b1-925b-6403580d28b6
STEP: Creating a pod to test consume secrets
Apr 16 13:41:22.170: INFO: Waiting up to 5m0s for pod "pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22" in namespace "secrets-4838" to be "Succeeded or Failed"
Apr 16 13:41:22.173: INFO: Pod "pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22": Phase="Pending", Reason="", readiness=false. Elapsed: 3.905993ms
Apr 16 13:41:24.176: INFO: Pod "pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006819032s
STEP: Saw pod success
Apr 16 13:41:24.176: INFO: Pod "pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22" satisfied condition "Succeeded or Failed"
Apr 16 13:41:24.179: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22 container env-test: <nil>
STEP: delete the pod
Apr 16 13:41:24.197: INFO: Waiting for pod pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22 to disappear
Apr 16 13:41:24.200: INFO: Pod pod-configmaps-531154bf-2834-439f-b8e0-229ce4a61b22 no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:24.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4838" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":616,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:17.685: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create and stop a working application [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating all guestbook components
Apr 16 13:41:17.738: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend
Apr 16 13:41:17.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:18.055: INFO: stderr: ""
Apr 16 13:41:18.056: INFO: stdout: "service/agnhost-replica created\n"
Apr 16 13:41:18.056: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend
Apr 16 13:41:18.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:18.298: INFO: stderr: ""
Apr 16 13:41:18.298: INFO: stdout: "service/agnhost-primary created\n"
Apr 16 13:41:18.298: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend
Apr 16 13:41:18.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:18.512: INFO: stderr: ""
Apr 16 13:41:18.512: INFO: stdout: "service/frontend created\n"
Apr 16 13:41:18.512: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.33 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80
Apr 16 13:41:18.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:18.777: INFO: stderr: ""
Apr 16 13:41:18.777: INFO: stdout: "deployment.apps/frontend created\n"
Apr 16 13:41:18.777: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.33 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379
Apr 16 13:41:18.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:19.024: INFO: stderr: ""
Apr 16 13:41:19.024: INFO: stdout: "deployment.apps/agnhost-primary created\n"
Apr 16 13:41:19.025: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.33 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379
Apr 16 13:41:19.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 create -f -'
Apr 16 13:41:19.289: INFO: stderr: ""
Apr 16 13:41:19.289: INFO: stdout: "deployment.apps/agnhost-replica created\n"
STEP: validating guestbook app
Apr 16 13:41:19.289: INFO: Waiting for all frontend pods to be Running.
Apr 16 13:41:24.340: INFO: Waiting for frontend to serve content.
Apr 16 13:41:24.350: INFO: Trying to add a new entry to the guestbook.
Apr 16 13:41:24.364: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 16 13:41:24.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:24.470: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:24.470: INFO: stdout: "service \"agnhost-replica\" force deleted\n"
STEP: using delete to clean up resources
Apr 16 13:41:24.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:24.609: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:24.610: INFO: stdout: "service \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Apr 16 13:41:24.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:24.707: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:24.707: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 16 13:41:24.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:24.781: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:24.781: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 16 13:41:24.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:24.902: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:24.903: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n"
STEP: using delete to clean up resources
Apr 16 13:41:24.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5181 delete --grace-period=0 --force -f -'
Apr 16 13:41:25.044: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 16 13:41:25.045: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:25.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5181" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":11,"skipped":351,"failed":0}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:18.175: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 16 13:41:18.234: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:41:20.238: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the pod with lifecycle hook
Apr 16 13:41:20.249: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:41:22.253: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 16 13:41:22.268: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 16 13:41:22.271: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 16 13:41:24.271: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 16 13:41:24.280: INFO: Pod pod-with-poststart-exec-hook still exists
Apr 16 13:41:26.272: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Apr 16 13:41:26.276: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:26.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5402" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":443,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:25.098: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:41:25.149: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf" in namespace "downward-api-4762" to be "Succeeded or Failed"
Apr 16 13:41:25.159: INFO: Pod "downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.819007ms
Apr 16 13:41:27.164: INFO: Pod "downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013954772s
STEP: Saw pod success
Apr 16 13:41:27.164: INFO: Pod "downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf" satisfied condition "Succeeded or Failed"
Apr 16 13:41:27.167: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf container client-container: <nil>
STEP: delete the pod
Apr 16 13:41:27.193: INFO: Waiting for pod downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf to disappear
Apr 16 13:41:27.196: INFO: Pod downwardapi-volume-bfa805ef-a666-4469-bc3d-83a4cd4acdbf no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:27.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4762" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":365,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:24.249: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:41:24.287: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:27.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3332" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":35,"skipped":642,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:26.342: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:41:26.395: INFO: The status of Pod pod-secrets-bfb3a94a-33a4-4781-a579-0acf15e5363f is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:41:28.400: INFO: The status of Pod pod-secrets-bfb3a94a-33a4-4781-a579-0acf15e5363f is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:41:30.399: INFO: The status of Pod pod-secrets-bfb3a94a-33a4-4781-a579-0acf15e5363f is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:41:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7873" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":24,"skipped":485,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:27.221: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1095 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1095;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1095 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1095;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1095.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1095.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1095.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1095.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1095.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1095.svc;check="$$(dig +notcp +noall +answer +search 83.55.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.55.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.55.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.55.83_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1095 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1095;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1095 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1095;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1095.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1095.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1095.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1095.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1095.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1095.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1095.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1095.svc;check="$$(dig +notcp +noall +answer +search 83.55.130.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.130.55.83_udp@PTR;check="$$(dig +tcp +noall +answer +search 83.55.130.10.in-addr.arpa.
PTR)" && test -n "$$check" && echo OK > /results/10.130.55.83_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 16 13:41:29.315: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.319: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.322: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.326: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.329: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.334: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.340: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.344: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.363: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.376: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.382: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.386: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.390: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.394: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.398: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.401: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:29.427: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc]
Apr 16 13:41:34.432: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.436: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.440: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.444: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.447: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.450: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.453: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.456: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.471: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.473: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.477: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.480: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.483: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.487: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.491: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.494: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:34.506: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc]
Apr 16 13:41:39.432: INFO: Unable to read wheezy_udp@dns-test-service from pod
dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.435: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.438: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.442: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.446: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.452: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.456: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.472: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.475: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.478: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.481: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.484: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.486: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.489: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.492: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:39.503: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc]
Apr 16 13:41:44.432: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.435: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.438: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.451: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.469: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.472: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.475: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.478: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.480: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.484:
INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.487: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:44.501: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc]
Apr 16 13:41:49.433: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.436: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.439: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.443: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.446: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.449: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.452: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.455: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.469: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.472: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.474: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.477: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.480: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.483: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.486: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.490: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f)
Apr 16 13:41:49.501: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc
wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc] Apr 16 13:41:54.432: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.435: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.438: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.441: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.444: INFO: Unable to read wheezy_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.447: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.450: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc from pod 
dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.454: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.468: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.471: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.473: INFO: Unable to read jessie_udp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.476: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095 from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.479: INFO: Unable to read jessie_udp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.481: INFO: Unable to read jessie_tcp@dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.484: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.487: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc from pod dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f: the server could not find the requested resource (get pods dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f) Apr 16 13:41:54.498: INFO: Lookups using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1095 wheezy_tcp@dns-test-service.dns-1095 wheezy_udp@dns-test-service.dns-1095.svc wheezy_tcp@dns-test-service.dns-1095.svc wheezy_udp@_http._tcp.dns-test-service.dns-1095.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1095.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1095 jessie_tcp@dns-test-service.dns-1095 jessie_udp@dns-test-service.dns-1095.svc jessie_tcp@dns-test-service.dns-1095.svc jessie_udp@_http._tcp.dns-test-service.dns-1095.svc jessie_tcp@_http._tcp.dns-test-service.dns-1095.svc] Apr 16 13:41:59.502: INFO: DNS probes using dns-1095/dns-test-35c3c000-63b6-4da3-bd0f-b5309fc7385f succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:41:59.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1095" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":13,"skipped":374,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:41:59.726: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward api env vars Apr 16 13:42:00.229: INFO: Waiting up to 5m0s for pod "downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a" in namespace "downward-api-9358" to be "Succeeded or Failed" Apr 16 13:42:00.232: INFO: Pod "downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.916162ms Apr 16 13:42:02.237: INFO: Pod "downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.007614653s STEP: Saw pod success Apr 16 13:42:02.237: INFO: Pod "downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a" satisfied condition "Succeeded or Failed" Apr 16 13:42:02.241: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a container dapi-container: <nil> STEP: delete the pod Apr 16 13:42:02.260: INFO: Waiting for pod downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a to disappear Apr 16 13:42:02.262: INFO: Pod downward-api-bd332af8-60c8-4da3-a55c-d8e0619e347a no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:42:02.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9358" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":422,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:42:02.309: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 16 13:42:02.661: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Apr 16 13:42:04.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 42, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 42, 2, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 42, 2, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 42, 2, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 16 13:42:07.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore 
STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:42:19.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2699" for this suite. STEP: Destroying namespace "webhook-2699-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":15,"skipped":447,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:42:19.987: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename 
projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating secret with name projected-secret-test-27e7a456-fd47-4ed4-9919-415df2d2af7d STEP: Creating a pod to test consume secrets Apr 16 13:42:20.049: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d" in namespace "projected-8755" to be "Succeeded or Failed" Apr 16 13:42:20.053: INFO: Pod "pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6262ms Apr 16 13:42:22.058: INFO: Pod "pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008164331s STEP: Saw pod success Apr 16 13:42:22.058: INFO: Pod "pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d" satisfied condition "Succeeded or Failed" Apr 16 13:42:22.061: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d container secret-volume-test: <nil> STEP: delete the pod Apr 16 13:42:22.080: INFO: Waiting for pod pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d to disappear Apr 16 13:42:22.084: INFO: Pod pod-projected-secrets-d1836b5f-9df3-4e4b-9302-823af17aaa9d no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:42:22.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8755" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":493,"failed":0} SSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:37:26.554: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1301.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1301.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 96.122.139.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.139.122.96_udp@PTR;check="$$(dig +tcp +noall +answer +search 96.122.139.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.139.122.96_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1301.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1301.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1301.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1301.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1301.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 96.122.139.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.139.122.96_udp@PTR;check="$$(dig +tcp +noall +answer +search 96.122.139.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.139.122.96_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 13:41:08.492: INFO: Unable to read wheezy_udp@dns-test-service.dns-1301.svc.cluster.local from pod dns-1301/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e: the server is currently unable to handle the request (get pods dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e) Apr 16 13:42:34.640: FAIL: Unable to read wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local from pod dns-1301/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1301/pods/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e/proxy/results/wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f7af02bdd60, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000138000}, 0xc003c6d9f8) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000138000}, 0x38, 0x2bb9f85, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000138000}, 0x4a, 0xc003c6da88, 0x2378d47) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00016e800, 0xc003c6dad0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0044b3680, 0x10, 0x18}, {0x6ecd939, 0x7}, 0xc004c34c00, {0x78eb710, 0xc004b9c600}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000bde2c0, 0xc004c34c00, {0xc0044b3680, 0x10, 0x18}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.5() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc45 k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2371919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0005724e0, 0x71566f0) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E0416 13:42:34.640758 16 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Apr 16 13:42:34.640: Unable to read wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local from pod dns-1301/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e: Get 
\"https://172.18.0.3:6443/api/v1/namespaces/dns-1301/pods/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e/proxy/results/wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f7af02bdd60, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000138000}, 0xc003c6d9f8)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000138000}, 0x38, 0x2bb9f85, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000138000}, 0x4a, 0xc003c6da88, 0x2378d47)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00016e800, 0xc003c6dad0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0044b3680, 0x10, 0x18}, {0x6ecd939, 0x7}, 0xc004c34c00, {0x78eb710, 0xc004b9c600}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 
+0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000bde2c0, 0xc004c34c00, {0xc0044b3680, 0x10, 0x18})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.5()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc45\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x2371919)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc0005724e0, 0x71566f0)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
) goroutine 111 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6a38820, 0xc004d2a180})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc000118280})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x6a38820, 0xc004d2a180})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73
panic({0x610baa0, 0x76987f0})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc00043bb00, 0x167}, {0xc003c6d490, 0x0, 0x40})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc00043bb00, 0x167}, {0xc003c6d570, 0x6ec4cca, 0xc003c6d598})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
k8s.io/kubernetes/test/e2e/framework.Failf({0x6f7531e, 0x2d}, {0xc003c6d7e0, 0x0, 0x0})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f7af02bdd60, 0x0})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x77ba0a8, 0xc000138000}, 0xc003c6d9f8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x77ba0a8, 0xc000138000}, 0x38, 0x2bb9f85, 0x68)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x77ba0a8, 0xc000138000}, 0x4a, 0xc003c6da88, 0x2378d47)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x76a2200, 0xc00016e800, 0xc003c6dad0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50
k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc0044b3680, 0x10, 0x18}, {0x6ecd939, 0x7}, 0xc004c34c00, {0x78eb710, 0xc004b9c600}, 0x0, {0x0, ...})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc000bde2c0, 0xc004c34c00, {0xc0044b3680, 0x10, 0x18})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:182 +0xc45
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000102b60)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc003c6f5c8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc001447c20, 0xc003c6f990, {0x76a2200, 0xc00016e800})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc001447c20, {0x76a2200, 0xc00016e800})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc003ed5180, 0xc001447c20)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc003ed5180)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc003ed5180)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00017e070, {0x7f7af0506670, 0xc0005724e0}, {0x6f04445, 0x40}, {0xc00022f560, 0x3, 0x3}, {0x7811bb8, 0xc00016e800}, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x76a8840, 0xc0005724e0}, {0x6f04445, 0x14}, {0xc00042d580, 0x3, 0x6})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x76a8840, 0xc0005724e0}, {0x6f04445, 0x14}, {0xc0002327a0, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005724e0, 0x71566f0)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
STEP:
deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:42:34.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1301" for this suite.

• Failure [308.155 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for services [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 16 13:42:34.640: Unable to read wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local from pod dns-1301/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-1301/pods/dns-test-99cd2f2d-4db3-46e4-9f30-73286f558e4e/proxy/results/wheezy_tcp@dns-test-service.dns-1301.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:22.101: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-dqf8
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 13:42:22.147: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dqf8" in namespace "subpath-106" to be "Succeeded or Failed"
Apr 16 13:42:22.150: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.982504ms
Apr 16 13:42:24.154: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 2.006979508s
Apr 16 13:42:26.158: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 4.010660731s
Apr 16 13:42:28.163: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 6.015381808s
Apr 16 13:42:30.167: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 8.019463769s
Apr 16 13:42:32.171: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 10.023281421s
Apr 16 13:42:34.175: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 12.027485479s
Apr 16 13:42:36.179: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 14.031807056s
Apr 16 13:42:38.183: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 16.035255037s
Apr 16 13:42:40.188: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 18.040403661s
Apr 16 13:42:42.192: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Running", Reason="", readiness=true. Elapsed: 20.044628599s
Apr 16 13:42:44.196: INFO: Pod "pod-subpath-test-configmap-dqf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.04861594s
STEP: Saw pod success
Apr 16 13:42:44.196: INFO: Pod "pod-subpath-test-configmap-dqf8" satisfied condition "Succeeded or Failed"
Apr 16 13:42:44.199: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-subpath-test-configmap-dqf8 container test-container-subpath-configmap-dqf8: <nil>
STEP: delete the pod
Apr 16 13:42:44.213: INFO: Waiting for pod pod-subpath-test-configmap-dqf8 to disappear
Apr 16 13:42:44.216: INFO: Pod pod-subpath-test-configmap-dqf8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-dqf8
Apr 16 13:42:44.216: INFO: Deleting pod "pod-subpath-test-configmap-dqf8" in namespace "subpath-106"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:42:44.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-106" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":17,"skipped":497,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:44.236: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-test-volume-688f0fbf-fe5d-45ef-901c-9db073521838
STEP: Creating a pod to test consume configMaps
Apr 16 13:42:44.275: INFO: Waiting up to 5m0s for pod "pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297" in namespace "configmap-9358" to be "Succeeded or Failed"
Apr 16 13:42:44.278: INFO: Pod "pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297": Phase="Pending", Reason="", readiness=false. Elapsed: 3.261308ms
Apr 16 13:42:46.282: INFO: Pod "pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006954998s
STEP: Saw pod success
Apr 16 13:42:46.282: INFO: Pod "pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297" satisfied condition "Succeeded or Failed"
Apr 16 13:42:46.285: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-e11j1x pod pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297 container configmap-volume-test: <nil>
STEP: delete the pod
Apr 16 13:42:46.309: INFO: Waiting for pod pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297 to disappear
Apr 16 13:42:46.312: INFO: Pod pod-configmaps-6763a5e4-9dc8-4d9f-8332-a149c05d9297 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:42:46.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9358" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:46.361: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service in namespace services-7697
STEP: creating service affinity-clusterip in namespace services-7697
STEP: creating replication controller affinity-clusterip in namespace services-7697
I0416 13:42:46.411346      18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-7697, replica count: 3
I0416 13:42:49.462949      18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 16 13:42:49.469: INFO: Creating new exec pod
Apr 16 13:42:52.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7697 exec execpod-affinityxf4vz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
Apr 16 13:42:52.631: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Apr 16 13:42:52.631: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 16 13:42:52.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7697 exec execpod-affinityxf4vz -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.138.254.69 80'
Apr 16 13:42:52.782: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.138.254.69 80\nConnection to 10.138.254.69 80 port [tcp/http] succeeded!\n"
Apr 16 13:42:52.782: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 16 13:42:52.782: INFO: Running '/usr/local/bin/kubectl
--kubeconfig=/tmp/kubeconfig --namespace=services-7697 exec execpod-affinityxf4vz -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.138.254.69:80/ ; done' Apr 16 13:42:53.030: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.138.254.69:80/\n" Apr 16 13:42:53.030: INFO: stdout: "\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4\naffinity-clusterip-xhst4" Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4 Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4 Apr 16 13:42:53.030: INFO: Received response from host: 
affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Received response from host: affinity-clusterip-xhst4
Apr 16 13:42:53.030: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-7697, will wait for the garbage collector to delete the pods
Apr 16 13:42:53.096: INFO: Deleting ReplicationController affinity-clusterip took: 4.772253ms
Apr 16 13:42:53.197: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.678869ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:42:55.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7697" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":535,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:55.290: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 16 13:42:57.840: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8702 pod-service-account-a2c8e958-ee9e-4889-8672-901d8031c00e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 16 13:42:57.992: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8702 pod-service-account-a2c8e958-ee9e-4889-8672-901d8031c00e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 16 13:42:58.149: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8702 pod-service-account-a2c8e958-ee9e-4889-8672-901d8031c00e -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:42:58.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8702" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":20,"skipped":569,"failed":0}
SSSSSS
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":8,"skipped":174,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:34.713: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6019.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6019.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 220.132.129.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.129.132.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.132.129.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.129.132.220_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6019.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6019.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6019.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6019.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6019.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 220.132.129.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.129.132.220_udp@PTR;check="$$(dig +tcp +noall +answer +search 220.132.129.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.129.132.220_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 16 13:42:36.820: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.829: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.848: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.854: INFO: Unable to read
jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.857: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:36.870: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local]
Apr 16 13:42:41.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.878: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.881: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.883: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.900: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.903: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.906: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.909: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:41.920: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local]
Apr 16 13:42:46.876: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.880: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.888: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.891: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.911: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.915: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.918: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.921: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:46.934: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local]
Apr 16 13:42:51.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:51.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:51.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:51.885: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8)
Apr 16 13:42:51.902: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod
dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:51.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:51.908: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:51.912: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:51.925: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local] Apr 16 13:42:56.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod 
dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.886: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.902: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.910: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.914: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:42:56.927: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local] Apr 16 13:43:01.875: INFO: Unable to read wheezy_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.879: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.882: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.885: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.902: INFO: Unable to read jessie_udp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.905: INFO: Unable to read jessie_tcp@dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested 
resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.908: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.912: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local from pod dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8: the server could not find the requested resource (get pods dns-test-508fd154-7807-4d9d-9504-325ccd3741c8) Apr 16 13:43:01.925: INFO: Lookups using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 failed for: [wheezy_udp@dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@dns-test-service.dns-6019.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_udp@dns-test-service.dns-6019.svc.cluster.local jessie_tcp@dns-test-service.dns-6019.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6019.svc.cluster.local] Apr 16 13:43:06.925: INFO: DNS probes using dns-6019/dns-test-508fd154-7807-4d9d-9504-325ccd3741c8 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test service �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:43:07.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-6019" for this suite. 
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":9,"skipped":174,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:07.100: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Apr 16 13:43:07.152: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:09.156: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.7 on the node which pod1 resides and expect scheduled
Apr 16 13:43:09.164: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:11.169: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.7 but use UDP protocol on the node which pod2 resides
Apr 16 13:43:11.179: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:13.183: INFO: The status of Pod pod3 is Running (Ready = true)
Apr 16 13:43:13.190: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:15.194: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Apr 16 13:43:15.198: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54323/hostname] Namespace:hostport-1095 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:43:15.198: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:43:15.199: INFO: ExecWithOptions: Clientset creation
Apr 16 13:43:15.199: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1095/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.7+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54323
Apr 16 13:43:15.278: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54323/hostname] Namespace:hostport-1095 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:43:15.278: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:43:15.278: INFO: ExecWithOptions: Clientset creation
Apr 16 13:43:15.279: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1095/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.7%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54323 UDP
Apr 16 13:43:15.333: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54323] Namespace:hostport-1095 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:43:15.333: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:43:15.334: INFO: ExecWithOptions: Clientset creation
Apr 16 13:43:15.334: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-1095/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.7+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:20.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-1095" for this suite.
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":224,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:20.424: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve multiport endpoints from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service multi-endpoint-test in namespace services-128
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-128 to expose endpoints map[]
Apr 16 13:43:20.486: INFO: successfully validated that service multi-endpoint-test in namespace services-128 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-128
Apr 16 13:43:20.503: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:22.508: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-128 to expose endpoints map[pod1:[100]]
Apr 16 13:43:22.522: INFO: successfully validated that service multi-endpoint-test in namespace services-128 exposes endpoints map[pod1:[100]]
STEP: Creating pod pod2 in namespace services-128
Apr 16 13:43:22.529: INFO: The
status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:43:24.534: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-128 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 16 13:43:24.547: INFO: successfully validated that service multi-endpoint-test in namespace services-128 exposes endpoints map[pod1:[100] pod2:[101]]
STEP: Checking if the Service forwards traffic to pods
Apr 16 13:43:24.547: INFO: Creating new exec pod
Apr 16 13:43:27.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-128 exec execpodmlv2s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 80'
Apr 16 13:43:27.705: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 80\nConnection to multi-endpoint-test 80 port [tcp/http] succeeded!\n"
Apr 16 13:43:27.705: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 16 13:43:27.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-128 exec execpodmlv2s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.29.176 80'
Apr 16 13:43:27.854: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.29.176 80\nConnection to 10.137.29.176 80 port [tcp/http] succeeded!\n"
Apr 16 13:43:27.854: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 16 13:43:27.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-128 exec execpodmlv2s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 multi-endpoint-test 81'
Apr 16 13:43:27.986: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 multi-endpoint-test 81\nConnection to multi-endpoint-test 81 port [tcp/*] succeeded!\n"
Apr 16 13:43:27.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 16 13:43:27.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-128 exec execpodmlv2s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.137.29.176 81'
Apr 16 13:43:28.146: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.137.29.176 81\nConnection to 10.137.29.176 81 port [tcp/*] succeeded!\n"
Apr 16 13:43:28.146: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod1 in namespace services-128
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-128 to expose endpoints map[pod2:[101]]
Apr 16 13:43:28.177: INFO: successfully validated that service multi-endpoint-test in namespace services-128 exposes endpoints map[pod2:[101]]
STEP: Deleting pod pod2 in namespace services-128
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-128 to expose endpoints map[]
Apr 16 13:43:29.235: INFO: successfully validated that service multi-endpoint-test in namespace services-128 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:29.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-128" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":-1,"completed":11,"skipped":224,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:29.275: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should add annotations for pods in rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating Agnhost RC
Apr 16 13:43:29.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8686 create -f -'
Apr 16 13:43:30.146: INFO: stderr: ""
Apr 16 13:43:30.146: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Apr 16 13:43:31.151: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 16 13:43:31.151: INFO: Found 0 / 1
Apr 16 13:43:32.151: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 16 13:43:32.151: INFO: Found 1 / 1
Apr 16 13:43:32.151: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 16 13:43:32.154: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 16 13:43:32.154: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 16 13:43:32.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8686 patch pod agnhost-primary-46p7q -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 16 13:43:32.245: INFO: stderr: ""
Apr 16 13:43:32.245: INFO: stdout: "pod/agnhost-primary-46p7q patched\n"
STEP: checking annotations
Apr 16 13:43:32.249: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 16 13:43:32.249: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:32.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8686" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":12,"skipped":230,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:42:58.311: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3766, will wait for the garbage collector to delete the pods
Apr 16 13:43:00.410: INFO: Deleting Job.batch foo took: 4.649062ms
Apr 16 13:43:00.510: INFO: Terminating Job.batch foo pods took: 100.181306ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:32.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3766" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":21,"skipped":575,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:43:32.544: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating Pod �[1mSTEP�[0m: Reading file content from the nginx-container Apr 16 13:43:34.586: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-4056 PodName:pod-sharedvolume-ec1e4287-1d04-45cb-824c-3238287e85a0 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Apr 16 13:43:34.587: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 16 13:43:34.588: INFO: ExecWithOptions: Clientset creation Apr 16 13:43:34.588: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/emptydir-4056/pods/pod-sharedvolume-ec1e4287-1d04-45cb-824c-3238287e85a0/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING)) Apr 16 13:43:34.664: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:43:34.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-4056" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":22,"skipped":589,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:43:32.305: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. 
Apr 16 13:43:32.350: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:43:34.354: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: create the pod with lifecycle hook Apr 16 13:43:34.365: INFO: The status of Pod pod-with-prestop-exec-hook is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:43:36.369: INFO: The status of Pod pod-with-prestop-exec-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Apr 16 13:43:36.376: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 13:43:36.378: INFO: Pod pod-with-prestop-exec-hook still exists Apr 16 13:43:38.379: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 13:43:38.382: INFO: Pod pod-with-prestop-exec-hook still exists Apr 16 13:43:40.380: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 16 13:43:40.384: INFO: Pod pod-with-prestop-exec-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:43:40.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-2187" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":262,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:34.712: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-secret-xlhl
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 13:43:34.762: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xlhl" in namespace "subpath-4377" to be "Succeeded or Failed"
Apr 16 13:43:34.765: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Pending", Reason="", readiness=false. Elapsed: 3.110418ms
Apr 16 13:43:36.769: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 2.007567617s
Apr 16 13:43:38.773: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 4.011299436s
Apr 16 13:43:40.777: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 6.015643499s
Apr 16 13:43:42.781: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 8.019217612s
Apr 16 13:43:44.784: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 10.022430063s
Apr 16 13:43:46.788: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 12.02659745s
Apr 16 13:43:48.793: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 14.031031699s
Apr 16 13:43:50.796: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 16.034570153s
Apr 16 13:43:52.801: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 18.039092857s
Apr 16 13:43:54.806: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Running", Reason="", readiness=true. Elapsed: 20.044384737s
Apr 16 13:43:56.814: INFO: Pod "pod-subpath-test-secret-xlhl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.051952414s
STEP: Saw pod success
Apr 16 13:43:56.814: INFO: Pod "pod-subpath-test-secret-xlhl" satisfied condition "Succeeded or Failed"
Apr 16 13:43:56.818: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-subpath-test-secret-xlhl container test-container-subpath-secret-xlhl: <nil>
STEP: delete the pod
Apr 16 13:43:56.850: INFO: Waiting for pod pod-subpath-test-secret-xlhl to disappear
Apr 16 13:43:56.853: INFO: Pod pod-subpath-test-secret-xlhl no longer exists
STEP: Deleting pod pod-subpath-test-secret-xlhl
Apr 16 13:43:56.853: INFO: Deleting pod "pod-subpath-test-secret-xlhl" in namespace "subpath-4377"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:56.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4377" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":23,"skipped":617,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:56.873: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 16 13:43:56.912: INFO: Waiting up to 5m0s for pod "pod-7d9a8626-8461-4af7-bea7-2022d70c4806" in namespace "emptydir-2333" to be "Succeeded or Failed"
Apr 16 13:43:56.914: INFO: Pod "pod-7d9a8626-8461-4af7-bea7-2022d70c4806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.664739ms
Apr 16 13:43:58.918: INFO: Pod "pod-7d9a8626-8461-4af7-bea7-2022d70c4806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006602081s
STEP: Saw pod success
Apr 16 13:43:58.918: INFO: Pod "pod-7d9a8626-8461-4af7-bea7-2022d70c4806" satisfied condition "Succeeded or Failed"
Apr 16 13:43:58.922: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-7d9a8626-8461-4af7-bea7-2022d70c4806 container test-container: <nil>
STEP: delete the pod
Apr 16 13:43:58.937: INFO: Waiting for pod pod-7d9a8626-8461-4af7-bea7-2022d70c4806 to disappear
Apr 16 13:43:58.940: INFO: Pod pod-7d9a8626-8461-4af7-bea7-2022d70c4806 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:43:58.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2333" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":622,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:43:58.974: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support rollover [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:43:59.019: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 16 13:44:04.023: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 16 13:44:04.024: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 16 13:44:06.028: INFO: Creating deployment "test-rollover-deployment"
Apr 16 13:44:06.035: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Apr 16 13:44:08.042: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Apr 16 13:44:08.049: INFO: Ensure that both replica sets have 1 created replica
Apr 16 13:44:08.055: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Apr 16 13:44:08.063: INFO: Updating deployment test-rollover-deployment
Apr 16 13:44:08.064: INFO: 
Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 16 13:44:10.070: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 16 13:44:10.077: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 16 13:44:10.083: INFO: all replica sets need to contain the pod-template-hash label Apr 16 13:44:10.083: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:44:12.090: INFO: all replica sets need to contain the pod-template-hash label Apr 16 13:44:12.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 
44, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:44:14.092: INFO: all replica sets need to contain the pod-template-hash label Apr 16 13:44:14.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:44:16.093: INFO: all replica sets need to contain the pod-template-hash label Apr 16 13:44:16.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" 
is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:44:18.091: INFO: all replica sets need to contain the pod-template-hash label Apr 16 13:44:18.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 44, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 44, 6, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:44:20.092: INFO: Apr 16 13:44:20.092: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 16 13:44:20.100: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-7711 e069523d-1a7a-44bc-884d-fc6832af8306 10642 2 2022-04-16 13:44:06 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-16 13:44:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bf25b8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-16 13:44:06 +0000 UTC,LastTransitionTime:2022-04-16 13:44:06 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668b7f667d" has successfully progressed.,LastUpdateTime:2022-04-16 13:44:19 +0000 UTC,LastTransitionTime:2022-04-16 13:44:06 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 16 13:44:20.104: INFO: New ReplicaSet "test-rollover-deployment-668b7f667d" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668b7f667d deployment-7711 71fb9b20-0e33-4fc8-ac46-f3c595a02c54 10632 2 2022-04-16 13:44:08 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment e069523d-1a7a-44bc-884d-fc6832af8306 0xc004bf2ab7 0xc004bf2ab8}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:44:08 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e069523d-1a7a-44bc-884d-fc6832af8306\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:44:19 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668b7f667d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bf2b68 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:44:20.104: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 16 13:44:20.104: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-7711 8b947a24-7337-45be-9882-ce6f18980a5b 10641 2 2022-04-16 13:43:59 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment e069523d-1a7a-44bc-884d-fc6832af8306 0xc004bf2987 0xc004bf2988}] [] [{e2e.test Update apps/v1 2022-04-16 13:43:59 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:44:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e069523d-1a7a-44bc-884d-fc6832af8306\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:44:19 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004bf2a48 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:44:20.104: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-7711 552fcdcc-f440-40e8-9876-5133cdc7df39 10594 2 2022-04-16 13:44:06 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment e069523d-1a7a-44bc-884d-fc6832af8306 0xc004bf2bd7 0xc004bf2bd8}] [] [{kube-controller-manager Update apps/v1 2022-04-16 13:44:06 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"e069523d-1a7a-44bc-884d-fc6832af8306\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-16 13:44:08 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004bf2c88 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] 
<nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 16 13:44:20.108: INFO: Pod "test-rollover-deployment-668b7f667d-csd9k" is available: &Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-csd9k test-rollover-deployment-668b7f667d- deployment-7711 609635c0-3cd8-4ba5-aab8-158500e82960 10608 0 2022-04-16 13:44:08 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d 71fb9b20-0e33-4fc8-ac46-f3c595a02c54 0xc0021d7db7 0xc0021d7db8}] [] [{kube-controller-manager Update v1 2022-04-16 13:44:08 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71fb9b20-0e33-4fc8-ac46-f3c595a02c54\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {Go-http-client Update v1 2022-04-16 13:44:09 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.68\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kpj6t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kpj6t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-3a12zq-worker-jbucf3,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:44:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:44:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:44:09 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-16 13:44:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.68,StartTime:2022-04-16 13:44:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-16 13:44:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://d67734b3eb5b2fa204bc53707048a0ff5b92f0b627ba04681f4013df998747eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:44:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7711" for this suite.
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":25,"skipped":637,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:20.132: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Request ServerVersion STEP: Confirm major version Apr 16 13:44:20.165: INFO: Major version: 1 STEP: Confirm minor version Apr 16 13:44:20.165: INFO: cleanMinorVersion: 23 Apr 16 13:44:20.165: INFO: Minor version: 23 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:20.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "server-version-1727" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":26,"skipped":646,"failed":0} SSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:20.186: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should guarantee kube-root-ca.crt exist in any namespace [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 13:44:20.213: INFO: Got root ca configmap in namespace "svcaccounts-2533" Apr 16 13:44:20.217: INFO: Deleted root ca configmap in namespace "svcaccounts-2533" STEP: waiting for a new root ca configmap created Apr 16 13:44:20.721: INFO: Recreated root ca configmap in namespace "svcaccounts-2533" Apr 16 13:44:20.724: INFO: Updated root ca configmap in namespace "svcaccounts-2533" STEP: waiting for the root ca configmap reconciled Apr 16 13:44:21.230: INFO: Reconciled root ca configmap in namespace "svcaccounts-2533" [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:21.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2533" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":27,"skipped":655,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:21.335: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of ReplicaSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create a ReplicaSet STEP: Verify that the required pods have come up Apr 16 13:44:21.381: INFO: Pod name sample-pod: Found 0 pods out of 3 Apr 16 13:44:26.386: INFO: Pod name sample-pod: Found 3 pods out of 3 STEP: ensuring each pod is running Apr 16 13:44:26.389: INFO: Replica Status: {Replicas:3 FullyLabeledReplicas:3 ReadyReplicas:3 AvailableReplicas:3 ObservedGeneration:1 Conditions:[]} STEP: Listing all ReplicaSets STEP: DeleteCollection of the ReplicaSets STEP: After DeleteCollection 
verify that ReplicaSets have been deleted [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:26.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9883" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should list and delete a collection of ReplicaSets [Conformance]","total":-1,"completed":28,"skipped":716,"failed":0} S ------------------------------ [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:26.430: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:37.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3822" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":29,"skipped":717,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:43:40.410: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-9520 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating stateful set ss in namespace statefulset-9520 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9520 Apr 16 13:43:40.450: INFO: Found 0 stateful pods, waiting for 1 Apr 16 13:43:50.455: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 16 13:43:50.458: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:43:50.602: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:43:50.602: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:43:50.602: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 13:43:50.606: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 16 13:44:00.611: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 16 13:44:00.611: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 13:44:00.625: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 13:44:00.625: INFO: ss-0 k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC }] Apr 16 13:44:00.625: INFO: Apr 16 13:44:00.625: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 16 13:44:01.630: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995975091s Apr 16 13:44:02.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990676195s Apr 16 13:44:03.640: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986062584s Apr 16 13:44:04.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98076456s Apr 16 13:44:05.650: INFO: Verifying statefulset ss doesn't scale past 3 for another 
4.975222323s Apr 16 13:44:06.655: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970847686s Apr 16 13:44:07.660: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.965242085s Apr 16 13:44:08.667: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.959629813s Apr 16 13:44:09.672: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.299012ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9520 Apr 16 13:44:10.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 13:44:10.824: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Apr 16 13:44:10.824: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 13:44:10.824: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 13:44:10.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 13:44:10.986: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 16 13:44:10.986: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 13:44:10.986: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 13:44:10.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 16 13:44:11.141: INFO: stderr: "+ mv -v /tmp/index.html 
/usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n" Apr 16 13:44:11.142: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 16 13:44:11.142: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 16 13:44:11.146: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 16 13:44:21.152: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 16 13:44:21.152: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 16 13:44:21.152: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 16 13:44:21.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:44:21.312: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:44:21.312: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:44:21.312: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 13:44:21.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:44:21.479: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:44:21.479: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:44:21.479: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 13:44:21.479: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-9520 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 16 13:44:21.647: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 16 13:44:21.647: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 16 13:44:21.647: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 16 13:44:21.647: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 13:44:21.654: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 16 13:44:31.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 16 13:44:31.662: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 16 13:44:31.662: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 16 13:44:31.676: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 13:44:31.676: INFO: ss-0 k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC }] Apr 16 13:44:31.676: INFO: ss-1 k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:00 +0000 UTC }] Apr 16 13:44:31.676: INFO: ss-2 k8s-upgrade-and-conformance-3a12zq-worker-e11j1x Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:00 +0000 UTC }] Apr 16 13:44:31.676: INFO: Apr 16 13:44:31.676: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 16 13:44:32.680: INFO: POD NODE PHASE GRACE CONDITIONS Apr 16 13:44:32.680: INFO: ss-0 k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:44:21 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-16 13:43:40 +0000 UTC }] Apr 16 13:44:32.680: INFO: Apr 16 13:44:32.680: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 16 13:44:33.685: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.990903125s Apr 16 13:44:34.690: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.986075922s Apr 16 13:44:35.696: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.982234556s Apr 16 13:44:36.701: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.975781193s Apr 16 13:44:37.708: INFO: Verifying statefulset ss doesn't scale past 0 for another 
3.970621076s Apr 16 13:44:38.711: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.964664647s Apr 16 13:44:39.716: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.961054236s Apr 16 13:44:40.720: INFO: Verifying statefulset ss doesn't scale past 0 for another 955.488066ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9520 Apr 16 13:44:41.724: INFO: Scaling statefulset ss to 0 Apr 16 13:44:41.735: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 16 13:44:41.741: INFO: Deleting all statefulset in ns statefulset-9520 Apr 16 13:44:41.744: INFO: Scaling statefulset ss to 0 Apr 16 13:44:41.752: INFO: Waiting for statefulset status.replicas updated to 0 Apr 16 13:44:41.754: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:41.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9520" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":14,"skipped":269,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:41.860: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Creating a pod to test downward API volume plugin Apr 16 13:44:41.899: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505" in namespace "downward-api-8719" to be "Succeeded or 
Failed" Apr 16 13:44:41.902: INFO: Pod "downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505": Phase="Pending", Reason="", readiness=false. Elapsed: 2.382345ms Apr 16 13:44:43.906: INFO: Pod "downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006699923s STEP: Saw pod success Apr 16 13:44:43.906: INFO: Pod "downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505" satisfied condition "Succeeded or Failed" Apr 16 13:44:43.909: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505 container client-container: <nil> STEP: delete the pod Apr 16 13:44:43.928: INFO: Waiting for pod downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505 to disappear Apr 16 13:44:43.931: INFO: Pod downwardapi-volume-5847a2ff-ee7c-43af-a866-0e6a4720e505 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:43.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8719" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":328,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:44:43.965: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:44:44.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-795" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":16,"skipped":343,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:41:27.473: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n 
"$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 16 13:41:35.541: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45) Apr 16 13:41:35.550: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45) Apr 16 13:41:35.550: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local] Apr 16 13:41:40.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45) Apr 16 13:41:40.565: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45) Apr 16 13:41:40.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local] Apr 16 13:41:45.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods 
dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:41:45.563: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:41:45.563: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:41:50.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:41:50.565: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:41:50.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:41:55.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:41:55.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:00.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:00.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:05.561: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:05.568: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:10.560: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:10.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:15.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:15.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:20.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:20.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:25.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:25.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:30.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:30.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:35.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:35.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:40.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:40.564: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:45.557: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:45.563: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:50.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:50.563: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:42:55.560: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:42:55.567: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:00.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:00.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:05.557: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:05.563: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:10.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:10.564: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:15.557: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:15.563: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:20.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:20.569: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:25.563: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:25.575: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:30.561: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:30.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:35.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:35.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:40.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:40.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:45.557: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:45.564: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:50.560: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:50.567: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:43:55.560: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:43:55.568: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:00.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource
(get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:00.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:05.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:05.567: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:10.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:10.567: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:15.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:15.566: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:20.561: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:20.569: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:25.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:25.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:30.559: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:30.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:35.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:35.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:40.558: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45: the server could not find the requested resource (get pods dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45)
Apr 16 13:44:40.565: INFO: Lookups using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 failed for: [wheezy_tcp@kubernetes.default.svc.cluster.local]
Apr 16 13:44:45.566: INFO: DNS probes using dns-1401/dns-test-eba9b98a-c3be-4d27-9cb8-25af766d1b45 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:44:45.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1401" for this suite.
• [SLOW TEST:198.119 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should provide DNS for the cluster [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":36,"skipped":651,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:44.080: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 16 13:44:44.119: INFO: Waiting up to 5m0s for pod "downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7" in namespace "downward-api-710" to be "Succeeded or Failed"
Apr 16 13:44:44.123: INFO: Pod "downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.113598ms
Apr 16 13:44:46.127: INFO: Pod "downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007330779s
STEP: Saw pod success
Apr 16 13:44:46.127: INFO: Pod "downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7" satisfied condition "Succeeded or Failed"
Apr 16 13:44:46.130: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-e11j1x pod downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7 container dapi-container: <nil>
STEP: delete the pod
Apr 16 13:44:46.153: INFO: Waiting for pod downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7 to disappear
Apr 16 13:44:46.157: INFO: Pod downward-api-8a59b4f9-6dfe-4a4e-89f3-92182d72a4e7 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:44:46.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-710" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":376,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:46.177: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should provide secure master service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:44:46.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3875" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":18,"skipped":385,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:37.542: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5442
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5442
I0416 13:44:37.606907 18 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5442, replica count: 2
I0416 13:44:40.657840 18 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 16 13:44:40.657: INFO: Creating new exec pod
Apr 16 13:44:43.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 16 13:44:43.834: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 16 13:44:43.834: INFO: stdout: ""
Apr 16 13:44:44.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 16 13:44:44.995: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 16 13:44:44.995: INFO: stdout: "externalname-service-q7nxh"
Apr 16 13:44:44.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.141.196.157 80'
Apr 16 13:44:45.145: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.141.196.157 80\nConnection to 10.141.196.157 80 port [tcp/http] succeeded!\n"
Apr 16 13:44:45.145: INFO: stdout: "externalname-service-86x62"
Apr 16 13:44:45.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32241'
Apr 16 13:44:47.293: INFO: stderr: "+ nc -v -t -w 2 172.18.0.4 32241\n+ echo hostName\nConnection to 172.18.0.4 32241 port [tcp/*] succeeded!\n"
Apr 16 13:44:47.293: INFO: stdout: ""
Apr 16 13:44:48.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32241'
Apr 16 13:44:50.448: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32241\nConnection to 172.18.0.4 32241 port [tcp/*] succeeded!\n"
Apr 16 13:44:50.448: INFO: stdout: ""
Apr 16 13:44:51.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 32241'
Apr 16 13:44:51.520: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 32241\nConnection to 172.18.0.4 32241 port [tcp/*] succeeded!\n"
Apr 16 13:44:51.520: INFO: stdout: "externalname-service-q7nxh"
Apr 16 13:44:51.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5442 exec execpodn6876 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 32241'
Apr 16 13:44:51.812: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 32241\nConnection to 172.18.0.7 32241 port [tcp/*] succeeded!\n"
Apr 16 13:44:51.813: INFO: stdout: "externalname-service-q7nxh"
Apr 16 13:44:51.813: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:44:51.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5442" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":30,"skipped":741,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:46.250: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod liveness-e62b6d44-74f1-42dc-a87f-a58d2812dc7d in namespace container-probe-1425
Apr 16 13:44:48.299: INFO: Started pod liveness-e62b6d44-74f1-42dc-a87f-a58d2812dc7d in namespace container-probe-1425
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 13:44:48.301: INFO: Initial restart count of pod liveness-e62b6d44-74f1-42dc-a87f-a58d2812dc7d is 0
Apr 16 13:45:08.365: INFO: Restart count of pod container-probe-1425/liveness-e62b6d44-74f1-42dc-a87f-a58d2812dc7d is now 1 (20.064124868s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:08.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1425" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":408,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:08.395: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 16 13:45:08.431: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:10.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1035" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":20,"skipped":411,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:10.534: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
STEP: Watching for error events or started pod
STEP: Waiting for pod completion
STEP: Checking that the pod succeeded
STEP: Getting logs from the pod
STEP: Checking that the sysctl is actually updated
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:12.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-9623" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":21,"skipped":415,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:12.667: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:12.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5821" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":22,"skipped":458,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:51.960: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod pod-subpath-test-configmap-jrrs
STEP: Creating a pod to test atomic-volume-subpath
Apr 16 13:44:52.045: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jrrs" in namespace "subpath-1431" to be "Succeeded or Failed"
Apr 16 13:44:52.066: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Pending", Reason="", readiness=false. Elapsed: 20.706119ms
Apr 16 13:44:54.076: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 2.030952078s
Apr 16 13:44:56.083: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 4.037628982s
Apr 16 13:44:58.094: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 6.048993641s
Apr 16 13:45:00.101: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 8.055182507s
Apr 16 13:45:02.104: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 10.059041043s
Apr 16 13:45:04.108: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 12.062915973s
Apr 16 13:45:06.113: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 14.067825122s
Apr 16 13:45:08.118: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 16.072796728s
Apr 16 13:45:10.123: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 18.077297036s
Apr 16 13:45:12.127: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Running", Reason="", readiness=true. Elapsed: 20.08132169s
Apr 16 13:45:14.132: INFO: Pod "pod-subpath-test-configmap-jrrs": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.086431148s
STEP: Saw pod success
Apr 16 13:45:14.132: INFO: Pod "pod-subpath-test-configmap-jrrs" satisfied condition "Succeeded or Failed"
Apr 16 13:45:14.137: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-subpath-test-configmap-jrrs container test-container-subpath-configmap-jrrs: <nil>
STEP: delete the pod
Apr 16 13:45:14.154: INFO: Waiting for pod pod-subpath-test-configmap-jrrs to disappear
Apr 16 13:45:14.156: INFO: Pod pod-subpath-test-configmap-jrrs no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jrrs
Apr 16 13:45:14.156: INFO: Deleting pod "pod-subpath-test-configmap-jrrs" in namespace "subpath-1431"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:14.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1431" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":31,"skipped":768,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:14.214: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should have Endpoints and EndpointSlices pointing to API Server [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:45:14.253: INFO: Endpoints addresses: [172.18.0.9] , ports: [6443]
Apr 16 13:45:14.253: INFO: EndpointSlices addresses: [172.18.0.9] , ports: [6443]
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:14.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-6862" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":-1,"completed":32,"skipped":798,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:12.845: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name projected-configmap-test-volume-map-2e3f8ba1-2544-4503-9639-63671fb644d0
STEP: Creating a pod to test consume configMaps
Apr 16 13:45:12.905: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd" in namespace "projected-9645" to be "Succeeded or Failed"
Apr 16 13:45:12.916: INFO: Pod "pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.306933ms
Apr 16 13:45:14.920: INFO: Pod "pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01424669s
STEP: Saw pod success
Apr 16 13:45:14.920: INFO: Pod "pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd" satisfied condition "Succeeded or Failed"
Apr 16 13:45:14.922: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd container agnhost-container: <nil>
STEP: delete the pod
Apr 16 13:45:14.938: INFO: Waiting for pod pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd to disappear
Apr 16 13:45:14.943: INFO: Pod pod-projected-configmaps-00b89dc7-345c-4b3a-b835-ee6a63dfdccd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:14.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9645" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":521,"failed":1,"failures":["[sig-network] DNS should provide DNS for services [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:14.294: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:45:14.335: INFO: The status of Pod server-envvars-5e722341-bc04-4ce6-a760-15e62c156c97 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:16.340: INFO: The status of Pod server-envvars-5e722341-bc04-4ce6-a760-15e62c156c97 is Running (Ready = true)
Apr 16 13:45:16.364: INFO: Waiting up to 5m0s for pod "client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884" in namespace "pods-1128" to be "Succeeded or Failed"
Apr 16 13:45:16.377: INFO: Pod "client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884": Phase="Pending", Reason="", readiness=false. Elapsed: 12.922998ms
Apr 16 13:45:18.382: INFO: Pod "client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01785668s
STEP: Saw pod success
Apr 16 13:45:18.382: INFO: Pod "client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884" satisfied condition "Succeeded or Failed"
Apr 16 13:45:18.385: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884 container env3cont: <nil>
STEP: delete the pod
Apr 16 13:45:18.402: INFO: Waiting for pod client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884 to disappear
Apr 16 13:45:18.405: INFO: Pod client-envvars-9fe2df1e-f387-4fb3-a06f-adb2b2206884 no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:18.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1128" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":818,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:18.439: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7334
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7334
I0416 13:45:18.508180      18 runners.go:193] Created replication controller with name: externalname-service, namespace: services-7334, replica count: 2
I0416 13:45:21.559282      18 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 16 13:45:21.559: INFO: Creating new exec pod
Apr 16 13:45:28.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7334 exec execpodwbdx5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 16 13:45:28.846: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 16 13:45:28.846: INFO: stdout: ""
Apr 16 13:45:29.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7334 exec execpodwbdx5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 16 13:45:30.172: INFO: stderr: "+ nc -v -t -w 2 externalname-service 80\n+ echo hostName\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 16 13:45:30.172: INFO: stdout: "externalname-service-jbqqd"
Apr 16 13:45:30.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7334 exec execpodwbdx5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.164.240 80'
Apr 16 13:45:30.467: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.164.240 80\nConnection to 10.140.164.240 80 port [tcp/http] succeeded!\n"
Apr 16 13:45:30.467: INFO: stdout: "externalname-service-8rrtv"
Apr 16 13:45:30.467: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:30.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7334" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":34,"skipped":834,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:41:30.590: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating pod busybox-14eb729c-e38b-4a7d-af3c-0d35c9cbbb4f in namespace container-probe-900
Apr 16 13:41:32.633: INFO: Started pod busybox-14eb729c-e38b-4a7d-af3c-0d35c9cbbb4f in namespace container-probe-900
STEP: checking the pod's current state and verifying that restartCount is present
Apr 16 13:41:32.636: INFO: Initial restart count of pod busybox-14eb729c-e38b-4a7d-af3c-0d35c9cbbb4f is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:33.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-900" for this suite.
• [SLOW TEST:242.654 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":571,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:44:45.604: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:45.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3585" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":658,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:33.324: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 16 13:45:33.366: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:35.371: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:37.378: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:39.376: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:41.374: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:45:43.386: INFO: The status of Pod pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 16 13:45:43.908: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907"
Apr 16 13:45:43.908: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907" in namespace "pods-9314" to be "terminated due to deadline exceeded"
Apr 16 13:45:43.912: INFO: Pod "pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907": Phase="Running", Reason="", readiness=true. Elapsed: 3.472414ms
Apr 16 13:45:45.919: INFO: Pod "pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.010444228s
Apr 16 13:45:45.919: INFO: Pod "pod-update-activedeadlineseconds-f471f15b-b421-4013-a135-ceaa48a8a907" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:45.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9314" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":619,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:45.904: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:52.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1853" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":38,"skipped":743,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:46.238: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward api env vars
Apr 16 13:45:46.295: INFO: Waiting up to 5m0s for pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19" in namespace "downward-api-6691" to be "Succeeded or Failed"
Apr 16 13:45:46.300: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.53209ms
Apr 16 13:45:48.305: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009503224s
Apr 16 13:45:50.311: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015750417s
Apr 16 13:45:52.319: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023437053s
Apr 16 13:45:54.324: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.028562531s
STEP: Saw pod success
Apr 16 13:45:54.324: INFO: Pod "downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19" satisfied condition "Succeeded or Failed"
Apr 16 13:45:54.327: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19 container dapi-container: <nil>
STEP: delete the pod
Apr 16 13:45:54.356: INFO: Waiting for pod downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19 to disappear
Apr 16 13:45:54.362: INFO: Pod downward-api-046fe335-dfa5-4cf9-bb25-71f0e71c0b19 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:54.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6691" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":765,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:54.446: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:45:54.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7766" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":790,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:54.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 16 13:45:54.635: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:01.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7527" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":29,"skipped":811,"failed":0}
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:53.036: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating server pod server in namespace prestop-277
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-277
STEP: Deleting pre-stop pod
Apr 16 13:46:06.124: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-277" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":39,"skipped":776,"failed":0}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:02.017: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test env composition
Apr 16 13:46:02.091: INFO: Waiting up to 5m0s for pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f" in namespace "var-expansion-8612" to be "Succeeded or Failed"
Apr 16 13:46:02.097: INFO: Pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.465863ms
Apr 16 13:46:04.101: INFO: Pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010034029s
Apr 16 13:46:06.108: INFO: Pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016872114s
Apr 16 13:46:08.112: INFO: Pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02105323s
STEP: Saw pod success
Apr 16 13:46:08.112: INFO: Pod "var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f" satisfied condition "Succeeded or Failed"
Apr 16 13:46:08.115: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f container dapi-container: <nil>
STEP: delete the pod
Apr 16 13:46:08.135: INFO: Waiting for pod var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f to disappear
Apr 16 13:46:08.137: INFO: Pod var-expansion-07f2c425-f7a0-4388-bb0a-784f51b6853f no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:08.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8612" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":822,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:15.007: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:45:15.476: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:45:18.496: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:45:18.509: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9133-crds.webhook.example.com via the AdmissionRegistration API
Apr 16 13:45:29.042: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:45:39.172: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:45:49.259: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:45:59.365: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:46:09.381: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:46:09.382: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002c42a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForCustomResource(0xc0005ef600, {0xc002daf8e0, 0xc}, 0xc004614280, 0xc0040fe640, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1804 +0xc85
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:294 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005724e0, 0x71566f0)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:09.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1519" for this suite.
STEP: Destroying namespace "webhook-1519-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [54.994 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 16 13:46:09.382: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002c42a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1804
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:08.219: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:10.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7082" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":31,"skipped":865,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:10.389: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should test the lifecycle of a ReplicationController [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ReplicationController
STEP: waiting for RC to be added
STEP: waiting for available Replicas
STEP: patching ReplicationController
STEP: waiting for RC to be modified
STEP: patching ReplicationController status
STEP: waiting for RC to be modified
STEP: waiting for available Replicas
STEP: fetching ReplicationController status
STEP: patching ReplicationController scale
STEP: waiting for RC to be modified
STEP: waiting for ReplicationController's scale to be the max amount
STEP: fetching ReplicationController; ensuring that it's patched
STEP: updating ReplicationController status
STEP: waiting for RC to be modified
STEP: listing all ReplicationControllers
STEP: checking that ReplicationController has expected values
STEP: deleting ReplicationControllers by collection
STEP: waiting for ReplicationController to have a DELETED watchEvent
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:14.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1740" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":-1,"completed":32,"skipped":869,"failed":0}
SSSSS
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:14.087: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 16 13:46:14.150: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 16 13:46:14.154: INFO: starting watch
STEP: patching
STEP: updating
Apr 16 13:46:14.168: INFO: waiting for watch events with expected annotations
Apr 16 13:46:14.168: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:14.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-6560" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":33,"skipped":874,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:14.263: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename sysctl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65
[It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with one valid and two invalid sysctls
[AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:14.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sysctl-189" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":34,"skipped":890,"failed":0}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:06.207: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Apr 16 13:46:07.037: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:46:07.059: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 13:46:09.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 16 13:46:11.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 7, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:46:14.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:46:15.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8556" for this suite.
STEP: Destroying namespace "webhook-8556-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":40,"skipped":780,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:45:30.599: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
[It] should replace jobs when ReplaceConcurrent [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a ReplaceConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring the job is replaced with a new one
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:00.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-8929" for this suite.
•
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":23,"skipped":553,"failed":2,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:10.003: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:46:10.574: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 13:46:12.587: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 46, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 46, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:46:15.608: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:46:15.613: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1805-crds.webhook.example.com via the AdmissionRegistration API
Apr 16 13:46:26.138: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:46:36.249: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:46:46.352: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:46:56.450: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:47:06.500: INFO: Waiting for webhook configuration to be ready...
Apr 16 13:47:06.500: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002c42a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForCustomResource(0xc0005ef600, {0xc004635620, 0xc}, 0xc004478a00, 0xc004389fc0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1804 +0xc85
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.11()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:294 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x23f7fb7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2371919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0005724e0, 0x71566f0)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:07.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6141" for this suite.
STEP: Destroying namespace "webhook-6141-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [57.653 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630

  Apr 16 13:47:06.500: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002c42a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1804
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":35,"skipped":859,"failed":0}
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:00.731: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
Apr 16 13:47:12.626: INFO: 65 pods remaining
Apr 16 13:47:12.626: INFO: 65 pods has nil DeletionTimestamp
Apr 16 13:47:12.626: INFO:
STEP: Gathering metrics
Apr 16 13:47:17.620: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb is Running (Ready = true)
Apr 16 13:47:17.775: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

Apr 16 13:47:17.775: INFO: Deleting pod "simpletest-rc-to-be-deleted-22gxp" in namespace "gc-6733"
Apr 16 13:47:17.783: INFO: Deleting pod "simpletest-rc-to-be-deleted-272dh" in namespace "gc-6733"
Apr 16 13:47:17.799: INFO: Deleting pod "simpletest-rc-to-be-deleted-2q72g" in namespace "gc-6733"
Apr 16 13:47:17.827: INFO: Deleting pod "simpletest-rc-to-be-deleted-2r7sn" in namespace "gc-6733"
Apr 16 13:47:17.840: INFO: Deleting pod "simpletest-rc-to-be-deleted-47x2p" in namespace "gc-6733"
Apr 16 13:47:17.849: INFO: Deleting pod "simpletest-rc-to-be-deleted-4fvbb" in namespace "gc-6733"
Apr 16 13:47:17.862: INFO: Deleting pod "simpletest-rc-to-be-deleted-4qvs7" in namespace "gc-6733"
Apr 16 13:47:17.887: INFO: Deleting pod "simpletest-rc-to-be-deleted-4tn75" in namespace "gc-6733"
Apr 16 13:47:17.904: INFO: Deleting pod "simpletest-rc-to-be-deleted-5dngm" in namespace "gc-6733"
Apr 16 13:47:17.923: INFO: Deleting pod "simpletest-rc-to-be-deleted-62p7t" in namespace "gc-6733"
Apr 16 13:47:17.935: INFO: Deleting pod "simpletest-rc-to-be-deleted-657ml" in namespace "gc-6733"
Apr 16 13:47:17.957: INFO: Deleting pod "simpletest-rc-to-be-deleted-6w2tz" in namespace "gc-6733"
Apr 16 13:47:17.986: INFO: Deleting pod "simpletest-rc-to-be-deleted-7m687" in namespace "gc-6733"
Apr 16 13:47:18.002: INFO: Deleting pod "simpletest-rc-to-be-deleted-7nbbv" in namespace "gc-6733"
Apr 16 13:47:18.035: INFO: Deleting pod "simpletest-rc-to-be-deleted-7rt25" in namespace "gc-6733"
Apr 16 13:47:18.083: INFO: Deleting pod "simpletest-rc-to-be-deleted-7tm9d" in namespace "gc-6733"
Apr 16 13:47:18.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-7zlv7" in namespace "gc-6733"
Apr 16 13:47:18.145: INFO: Deleting pod "simpletest-rc-to-be-deleted-8jbb5" in namespace "gc-6733"
Apr 16 13:47:18.189: INFO: Deleting pod "simpletest-rc-to-be-deleted-9v89r" in namespace "gc-6733"
Apr 16 13:47:18.250: INFO: Deleting pod "simpletest-rc-to-be-deleted-cg8pw" in namespace "gc-6733"
Apr 16 13:47:18.269: INFO: Deleting pod "simpletest-rc-to-be-deleted-cgtqj" in namespace "gc-6733"
Apr 16 13:47:18.293: INFO: Deleting pod "simpletest-rc-to-be-deleted-crj9w" in namespace "gc-6733"
Apr 16 13:47:18.318: INFO: Deleting pod "simpletest-rc-to-be-deleted-d465s" in namespace "gc-6733"
Apr 16 13:47:18.341: INFO: Deleting pod "simpletest-rc-to-be-deleted-ddx94" in namespace "gc-6733"
Apr 16 13:47:18.384: INFO: Deleting pod "simpletest-rc-to-be-deleted-df5zj" in namespace "gc-6733"
Apr 16 13:47:18.433: INFO: Deleting pod "simpletest-rc-to-be-deleted-drftf" in namespace "gc-6733"
Apr 16 13:47:18.464: INFO: Deleting pod "simpletest-rc-to-be-deleted-dstrn" in namespace "gc-6733"
Apr 16 13:47:18.474: INFO: Deleting pod "simpletest-rc-to-be-deleted-f5rvz" in namespace "gc-6733"
Apr 16 13:47:18.492: INFO: Deleting pod "simpletest-rc-to-be-deleted-f6h56" in namespace "gc-6733"
Apr 16 13:47:18.519: INFO: Deleting pod "simpletest-rc-to-be-deleted-fjckp" in namespace "gc-6733"
Apr 16 13:47:18.535: INFO: Deleting pod "simpletest-rc-to-be-deleted-fxldn" in namespace "gc-6733"
Apr 16 13:47:18.546: INFO: Deleting pod "simpletest-rc-to-be-deleted-fznfg" in namespace "gc-6733"
Apr 16 13:47:18.573: INFO: Deleting pod "simpletest-rc-to-be-deleted-fzzht" in namespace "gc-6733"
Apr 16 13:47:18.601: INFO: Deleting pod "simpletest-rc-to-be-deleted-g696w" in namespace "gc-6733"
Apr 16 13:47:18.634: INFO: Deleting pod "simpletest-rc-to-be-deleted-gfndf" in namespace "gc-6733"
Apr 16 13:47:18.671: INFO: Deleting pod "simpletest-rc-to-be-deleted-gl44b" in namespace "gc-6733"
Apr 16 13:47:18.699: INFO: Deleting pod "simpletest-rc-to-be-deleted-gs82s" in namespace "gc-6733"
Apr 16 13:47:18.744: INFO: Deleting pod "simpletest-rc-to-be-deleted-hjxr4" in namespace "gc-6733"
Apr 16 13:47:18.781: INFO: Deleting pod "simpletest-rc-to-be-deleted-hmdzs" in namespace "gc-6733"
Apr 16 13:47:18.831: INFO: Deleting pod "simpletest-rc-to-be-deleted-hn7m8" in namespace "gc-6733"
Apr 16 13:47:18.884: INFO: Deleting pod "simpletest-rc-to-be-deleted-hqc85" in namespace "gc-6733"
Apr 16 13:47:18.908: INFO: Deleting pod "simpletest-rc-to-be-deleted-hsvnr" in namespace "gc-6733"
Apr 16 13:47:18.934: INFO: Deleting pod "simpletest-rc-to-be-deleted-hvbww" in namespace "gc-6733"
Apr 16 13:47:18.952: INFO: Deleting pod "simpletest-rc-to-be-deleted-hwb98" in namespace "gc-6733"
Apr 16 13:47:19.015: INFO: Deleting pod "simpletest-rc-to-be-deleted-js44f" in namespace "gc-6733"
Apr 16 13:47:19.060: INFO: Deleting pod "simpletest-rc-to-be-deleted-k4ssk" in namespace "gc-6733"
Apr 16 13:47:19.086: INFO: Deleting pod "simpletest-rc-to-be-deleted-k4xj8" in namespace "gc-6733"
Apr 16 13:47:19.104: INFO: Deleting pod "simpletest-rc-to-be-deleted-kb65x" in namespace "gc-6733"
Apr 16 13:47:19.128: INFO: Deleting pod "simpletest-rc-to-be-deleted-kcrld" in namespace "gc-6733"
Apr 16 13:47:19.163: INFO: Deleting pod "simpletest-rc-to-be-deleted-kdq5r" in namespace "gc-6733"
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:19.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6733" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":-1,"completed":36,"skipped":859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":23,"skipped":553,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating
a kubernetes client Apr 16 13:47:07.659: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 16 13:47:09.646: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 16 13:47:11.799: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:47:13.819: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 
16, 13, 47, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:47:15.809: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:47:17.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 16 13:47:19.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 47, 9, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 16 13:47:22.833: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 13:47:22.836: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the mutating webhook for custom resource e2e-test-webhook-7820-crds.webhook.example.com via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:47:25.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready �[1mSTEP�[0m: Destroying namespace "webhook-6716" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-6716-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":24,"skipped":553,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:47:26.032: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating the pod Apr 16 13:47:26.148: INFO: The status of Pod annotationupdate10d9aff0-e1d9-430a-9b98-2d82b7b35611 is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:47:28.152: INFO: The status of Pod annotationupdate10d9aff0-e1d9-430a-9b98-2d82b7b35611 is Running (Ready = true) Apr 16 13:47:28.674: INFO: Successfully 
updated pod "annotationupdate10d9aff0-e1d9-430a-9b98-2d82b7b35611" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:47:32.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-1687" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":553,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:47:19.395: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 16 13:47:19.504: INFO: >>> kubeConfig: /tmp/kubeconfig Apr 16 13:47:24.379: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] 
CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:47:35.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-9445" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":37,"skipped":901,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:47:35.635: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename svcaccounts �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should mount projected service account token [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test service account token: Apr 16 13:47:35.679: INFO: Waiting up to 5m0s for pod "test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b" in namespace "svcaccounts-619" to be "Succeeded or Failed" Apr 16 13:47:35.682: INFO: Pod "test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.77394ms Apr 16 13:47:37.686: INFO: Pod "test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006562309s �[1mSTEP�[0m: Saw pod success Apr 16 13:47:37.686: INFO: Pod "test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b" satisfied condition "Succeeded or Failed" Apr 16 13:47:37.689: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 16 13:47:37.712: INFO: Waiting for pod test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b to disappear Apr 16 13:47:37.721: INFO: Pod test-pod-74c186d4-3da4-4bb0-9fc3-2013c576003b no longer exists [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:47:37.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "svcaccounts-619" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":38,"skipped":935,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:37.741: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating configMap with name configmap-projected-all-test-volume-473faf4a-c0c0-4796-a01d-76cb3157995c
STEP: Creating secret with name secret-projected-all-test-volume-a3d53a80-0368-4e8c-9cec-a558132310f0
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 16 13:47:37.793: INFO: Waiting up to 5m0s for pod "projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d" in namespace "projected-3219" to be "Succeeded or Failed"
Apr 16 13:47:37.800: INFO: Pod "projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.173075ms
Apr 16 13:47:39.805: INFO: Pod "projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011038266s
STEP: Saw pod success
Apr 16 13:47:39.805: INFO: Pod "projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d" satisfied condition "Succeeded or Failed"
Apr 16 13:47:39.809: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d container projected-all-volume-test: <nil>
STEP: delete the pod
Apr 16 13:47:39.825: INFO: Waiting for pod projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d to disappear
Apr 16 13:47:39.828: INFO: Pod projected-volume-ae2be8f8-496b-4da1-953b-025436ee2a1d no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:39.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3219" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":939,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:32.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-8337
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-8337
STEP: creating replication controller externalsvc in namespace services-8337
I0416 13:47:32.810404 16 runners.go:193] Created replication controller with name: externalsvc, namespace: services-8337, replica count: 2
I0416 13:47:35.864641 16 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 16 13:47:35.882: INFO: Creating new exec pod
Apr 16 13:47:37.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8337 exec execpodhrdps -- /bin/sh -x -c nslookup clusterip-service.services-8337.svc.cluster.local'
Apr 16 13:47:38.383: INFO: stderr: "+ nslookup clusterip-service.services-8337.svc.cluster.local\n"
Apr 16 13:47:38.383: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-8337.svc.cluster.local\tcanonical name = externalsvc.services-8337.svc.cluster.local.\nName:\texternalsvc.services-8337.svc.cluster.local\nAddress: 10.130.1.87\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-8337, will wait for the garbage collector to delete the pods
Apr 16 13:47:38.445: INFO: Deleting ReplicationController externalsvc took: 6.801264ms
Apr 16 13:47:38.545: INFO: Terminating ReplicationController externalsvc pods took: 100.197599ms
Apr 16 13:47:40.665: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:40.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8337" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":26,"skipped":566,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:40.757: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
[AfterEach] [sig-node] Lease
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:40.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-9495" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":27,"skipped":599,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:39.841: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:47:39.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5" in namespace "downward-api-5690" to be "Succeeded or Failed"
Apr 16 13:47:39.883: INFO: Pod "downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025306ms
Apr 16 13:47:41.890: INFO: Pod "downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009719092s
STEP: Saw pod success
Apr 16 13:47:41.890: INFO: Pod "downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5" satisfied condition "Succeeded or Failed"
Apr 16 13:47:41.893: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-e11j1x pod downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5 container client-container: <nil>
STEP: delete the pod
Apr 16 13:47:41.922: INFO: Waiting for pod downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5 to disappear
Apr 16 13:47:41.926: INFO: Pod downwardapi-volume-f3040e09-d01b-41eb-b624-e95242f3e0a5 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:41.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5690" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":941,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:40.939: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating projection with secret that has name projected-secret-test-225b7ffd-4d3a-41fe-a036-376471a73c38
STEP: Creating a pod to test consume secrets
Apr 16 13:47:40.997: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775" in namespace "projected-7865" to be "Succeeded or Failed"
Apr 16 13:47:41.001: INFO: Pod "pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775": Phase="Pending", Reason="", readiness=false. Elapsed: 4.003524ms
Apr 16 13:47:43.004: INFO: Pod "pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007224389s
STEP: Saw pod success
Apr 16 13:47:43.004: INFO: Pod "pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775" satisfied condition "Succeeded or Failed"
Apr 16 13:47:43.006: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 16 13:47:43.026: INFO: Waiting for pod pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775 to disappear
Apr 16 13:47:43.028: INFO: Pod pod-projected-secrets-29a130bd-9187-4b9e-820a-125db9653775 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:43.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7865" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":607,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:41.943: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 16 13:47:42.003: INFO: Waiting up to 5m0s for pod "pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7" in namespace "emptydir-3287" to be "Succeeded or Failed"
Apr 16 13:47:42.007: INFO: Pod "pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.662451ms
Apr 16 13:47:44.021: INFO: Pod "pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.018197758s
STEP: Saw pod success
Apr 16 13:47:44.021: INFO: Pod "pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7" satisfied condition "Succeeded or Failed"
Apr 16 13:47:44.025: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-s4hc7 pod pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7 container test-container: <nil>
STEP: delete the pod
Apr 16 13:47:44.042: INFO: Waiting for pod pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7 to disappear
Apr 16 13:47:44.046: INFO: Pod pod-4c446327-41aa-4ee6-a726-a5cb18cd6ac7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:44.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3287" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":944,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:43.084: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating the pod
Apr 16 13:47:43.127: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:45.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9454" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":29,"skipped":630,"failed":3,"failures":["[sig-network] DNS should provide DNS for services [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:44.101: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:47:44.167: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"2bde1854-8bdf-4b68-93b8-912308a346a1", Controller:(*bool)(0xc00481f446), BlockOwnerDeletion:(*bool)(0xc00481f447)}}
Apr 16 13:47:44.177: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ed6b36eb-7532-4e8d-a78f-bdef893ebd40", Controller:(*bool)(0xc0048629fe), BlockOwnerDeletion:(*bool)(0xc0048629ff)}}
Apr 16 13:47:44.199: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"0fb9a6ad-e8a3-43ff-822e-cb9540113872", Controller:(*bool)(0xc00489eb16), BlockOwnerDeletion:(*bool)(0xc00489eb17)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:49.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4477" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":42,"skipped":970,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:49.260: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Apr 16 13:47:49.311: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:49.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-8118" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":43,"skipped":987,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:49.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to create a functioning NodePort service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating service nodeport-test with type=NodePort in namespace services-9448
STEP: creating replication controller nodeport-test in namespace services-9448
I0416 13:47:49.424873 18 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-9448, replica count: 2
I0416 13:47:52.476424 18 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 16 13:47:52.476: INFO: Creating new exec pod
Apr 16 13:47:55.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80'
Apr 16 13:47:55.647: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Apr 16 13:47:55.647: INFO: stdout: "nodeport-test-wc6cp"
Apr 16 13:47:55.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.99.228 80'
Apr 16 13:47:55.873: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.99.228 80\nConnection to 10.140.99.228 80 port [tcp/http] succeeded!\n"
Apr 16 13:47:55.873: INFO: stdout: "nodeport-test-wc6cp"
Apr 16 13:47:55.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 30564'
Apr 16 13:47:56.095: INFO: stderr: "+ + nc -v -techo -w 2 hostName 172.18.0.7\n 30564\nConnection to 172.18.0.7 30564 port [tcp/*] succeeded!\n"
Apr 16 13:47:56.095: INFO: stdout: ""
Apr 16 13:47:57.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 30564'
Apr 16 13:47:57.243: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 30564\nConnection to 172.18.0.7 30564 port [tcp/*] succeeded!\n"
Apr 16 13:47:57.243: INFO: stdout: "nodeport-test-wc6cp"
Apr 16 13:47:57.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 30564'
Apr 16 13:47:57.421: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 30564\nConnection to 172.18.0.6 30564 port [tcp/*] succeeded!\n"
Apr 16 13:47:57.421: INFO: stdout: ""
Apr 16 13:47:58.422: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9448 exec execpodh9sm9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 30564'
Apr 16 13:47:58.566: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 30564\nConnection to 172.18.0.6 30564 port [tcp/*] succeeded!\n"
Apr 16 13:47:58.566: INFO: stdout: "nodeport-test-k7rgc"
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:47:58.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9448" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":44,"skipped":1005,"failed":0}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:47:58.583: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:09.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7950" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":45,"skipped":1008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:09.873: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should delete a collection of services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a collection of services
Apr 16 13:48:09.908: INFO: Creating e2e-svc-a-w5ndw
Apr 16 13:48:09.919: INFO: Creating e2e-svc-b-27jpz
Apr 16 13:48:09.947: INFO: Creating e2e-svc-c-8b5hr
STEP: deleting service collection
Apr 16 13:48:10.017: INFO: Collection of services has been deleted
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:10.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8892" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":46,"skipped":1039,"failed":0}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:10.058: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-map-c6cc097f-af7c-47e0-85d6-8f4864f5225f
STEP: Creating a pod to test consume secrets
Apr 16 13:48:10.099: INFO: Waiting up to 5m0s for pod "pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247" in namespace "secrets-4587" to be "Succeeded or Failed"
Apr 16 13:48:10.103: INFO: Pod "pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247": Phase="Pending", Reason="", readiness=false. Elapsed: 3.445265ms
Apr 16 13:48:12.108: INFO: Pod "pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008184471s
STEP: Saw pod success
Apr 16 13:48:12.108: INFO: Pod "pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247" satisfied condition "Succeeded or Failed"
Apr 16 13:48:12.110: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247 container secret-volume-test: <nil>
STEP: delete the pod
Apr 16 13:48:12.124: INFO: Waiting for pod pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247 to disappear
Apr 16 13:48:12.127: INFO: Pod pod-secrets-4091cae5-5ac6-41c8-be98-5acf8b646247 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:12.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4587" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":1052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:12.214: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 16 13:48:12.260: INFO: Waiting up to 5m0s for pod "pod-1d4a0f90-8059-4403-895e-cbdc49f00f28" in namespace "emptydir-1799" to be "Succeeded or Failed"
Apr 16 13:48:12.264: INFO: Pod "pod-1d4a0f90-8059-4403-895e-cbdc49f00f28": Phase="Pending", Reason="", readiness=false. Elapsed: 3.914741ms
Apr 16 13:48:14.269: INFO: Pod "pod-1d4a0f90-8059-4403-895e-cbdc49f00f28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008671928s
STEP: Saw pod success
Apr 16 13:48:14.269: INFO: Pod "pod-1d4a0f90-8059-4403-895e-cbdc49f00f28" satisfied condition "Succeeded or Failed"
Apr 16 13:48:14.272: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-1d4a0f90-8059-4403-895e-cbdc49f00f28 container test-container: <nil>
STEP: delete the pod
Apr 16 13:48:14.287: INFO: Waiting for pod pod-1d4a0f90-8059-4403-895e-cbdc49f00f28 to disappear
Apr 16 13:48:14.289: INFO: Pod pod-1d4a0f90-8059-4403-895e-cbdc49f00f28 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:14.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1799" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1107,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:14.359: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test downward API volume plugin
Apr 16 13:48:14.399: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60" in namespace "projected-8520" to be "Succeeded or Failed"
Apr 16 13:48:14.404: INFO: Pod "downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60": Phase="Pending", Reason="", readiness=false. Elapsed: 3.966561ms
Apr 16 13:48:16.408: INFO: Pod "downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008313005s
STEP: Saw pod success
Apr 16 13:48:16.408: INFO: Pod "downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60" satisfied condition "Succeeded or Failed"
Apr 16 13:48:16.411: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x pod downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60 container client-container: <nil>
STEP: delete the pod
Apr 16 13:48:16.436: INFO: Waiting for pod downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60 to disappear
Apr 16 13:48:16.438: INFO: Pod downwardapi-volume-7adbc437-2407-41cc-8dad-10bc06d0da60 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:16.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8520" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":1143,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:16.461: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:16.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-498" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":50,"skipped":1152,"failed":0}
SS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:48:16.532: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:48:16.597: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:48:18.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:20.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:22.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:24.601: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:26.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:28.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:30.601: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:32.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:34.601: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:36.601: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = false)
Apr 16 13:48:38.602: INFO: The status of Pod test-webserver-37f9d595-4283-4b05-bd36-5f096c9a1686 is Running (Ready = true)
Apr 16 13:48:38.606: INFO: Container started at 2022-04-16 13:48:17 +0000 UTC, pod became ready at 2022-04-16 13:48:36 +0000 UTC
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:48:38.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2681" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":1154,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:48:38.639: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6751.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-6751.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-6751.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-6751.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done �[1mSTEP�[0m: creating a pod to probe /etc/hosts �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Apr 16 13:48:40.707: INFO: DNS probes using 
dns-6751/dns-test-13f6690c-d3dc-466e-b21b-3beca277bf28 succeeded �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:48:40.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-6751" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":52,"skipped":1169,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:48:40.763: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating a replication controller Apr 16 13:48:40.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 create -f -' Apr 16 13:48:41.386: INFO: stderr: "" Apr 16 13:48:41.386: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Apr 16 13:48:41.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 16 13:48:41.510: INFO: stderr: "" Apr 16 13:48:41.510: INFO: stdout: "update-demo-nautilus-8fklb update-demo-nautilus-bsw86 " Apr 16 13:48:41.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods update-demo-nautilus-8fklb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 16 13:48:41.580: INFO: stderr: "" Apr 16 13:48:41.580: INFO: stdout: "" Apr 16 13:48:41.580: INFO: update-demo-nautilus-8fklb is created but not running Apr 16 13:48:46.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 16 13:48:46.650: INFO: stderr: "" Apr 16 13:48:46.651: INFO: stdout: "update-demo-nautilus-8fklb update-demo-nautilus-bsw86 " Apr 16 13:48:46.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods update-demo-nautilus-8fklb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 16 13:48:46.717: INFO: stderr: "" Apr 16 13:48:46.717: INFO: stdout: "true" Apr 16 13:48:46.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods update-demo-nautilus-8fklb -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 16 13:48:46.789: INFO: stderr: "" Apr 16 13:48:46.789: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 16 13:48:46.789: INFO: validating pod update-demo-nautilus-8fklb Apr 16 13:48:46.793: INFO: got data: { "image": "nautilus.jpg" } Apr 16 13:48:46.794: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 16 13:48:46.794: INFO: update-demo-nautilus-8fklb is verified up and running Apr 16 13:48:46.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods update-demo-nautilus-bsw86 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 16 13:48:46.879: INFO: stderr: "" Apr 16 13:48:46.879: INFO: stdout: "true" Apr 16 13:48:46.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods update-demo-nautilus-bsw86 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 16 13:48:46.960: INFO: stderr: "" Apr 16 13:48:46.960: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 16 13:48:46.960: INFO: validating pod update-demo-nautilus-bsw86 Apr 16 13:48:46.964: INFO: got data: { "image": "nautilus.jpg" } Apr 16 13:48:46.964: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 16 13:48:46.964: INFO: update-demo-nautilus-bsw86 is verified up and running STEP: using delete to clean up resources Apr 16 13:48:46.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 delete --grace-period=0 --force -f -' Apr 16 13:48:47.038: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 16 13:48:47.038: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 16 13:48:47.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get rc,svc -l name=update-demo --no-headers' Apr 16 13:48:47.144: INFO: stderr: "No resources found in kubectl-9515 namespace.\n" Apr 16 13:48:47.144: INFO: stdout: "" Apr 16 13:48:47.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9515 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 16 13:48:47.236: INFO: stderr: "" Apr 16 13:48:47.237: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:48:47.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9515" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":53,"skipped":1193,"failed":0} SSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:46:15.497: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pod with failed condition STEP: updating the pod Apr 16 13:48:16.090: INFO: Successfully updated pod "var-expansion-e5e54098-eef4-438f-a56c-ae5cd1ea7375" STEP: waiting for pod running STEP: deleting the pod gracefully Apr 16 13:48:18.098: INFO: Deleting pod "var-expansion-e5e54098-eef4-438f-a56c-ae5cd1ea7375" in namespace "var-expansion-3689" Apr 16 13:48:18.104: INFO: Wait up to 5m0s for pod "var-expansion-e5e54098-eef4-438f-a56c-ae5cd1ea7375" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:48:50.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3689" for this suite. 
• [SLOW TEST:154.627 seconds] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 ------------------------------ [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:48:47.252: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should validate Replicaset Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: Create a Replicaset STEP: Verify that the required pods have come up. 
Apr 16 13:48:47.301: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 16 13:48:52.307: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running STEP: Getting /status Apr 16 13:48:52.312: INFO: Replicaset test-rs has Conditions: [] STEP: updating the Replicaset Status Apr 16 13:48:52.321: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the ReplicaSet status to be updated Apr 16 13:48:52.325: INFO: Observed &ReplicaSet event: ADDED Apr 16 13:48:52.325: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.325: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.325: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.325: INFO: Found replicaset test-rs in namespace replicaset-2046 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}] Apr 16 13:48:52.325: INFO: Replicaset test-rs has an updated status STEP: patching the Replicaset Status Apr 16 13:48:52.325: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Apr 16 13:48:52.331: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} STEP: watching for the Replicaset status to be patched Apr 16 13:48:52.333: INFO: Observed &ReplicaSet event: ADDED Apr 16 13:48:52.334: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.334: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.334: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.334: INFO: Observed replicaset test-rs in namespace replicaset-2046 with 
annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 16 13:48:52.334: INFO: Observed &ReplicaSet event: MODIFIED Apr 16 13:48:52.334: INFO: Found replicaset test-rs in namespace replicaset-2046 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC } Apr 16 13:48:52.334: INFO: Replicaset test-rs has a patched status [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:48:52.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2046" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":54,"skipped":1196,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:48:52.344: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: set up a multi version CRD Apr 16 13:48:52.374: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:49:06.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6976" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":55,"skipped":1196,"failed":0} SSSSSS ------------------------------ {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":41,"skipped":796,"failed":0} [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:48:50.126: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 16 13:48:50.162: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15352 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:48:50.162: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15352 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 16 13:48:50.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15353 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:48:50.171: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15353 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 16 13:48:50.179: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 
44f71207-ce9a-4390-b319-89316deea2dd 15355 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:48:50.180: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15355 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 16 13:48:50.185: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15357 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:48:50.185: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3026 44f71207-ce9a-4390-b319-89316deea2dd 15357 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a 
configmap with label B and ensuring the correct watchers observe the notification Apr 16 13:48:50.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3026 9fc2131c-3d17-4b17-8129-050fedb90e07 15358 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:48:50.189: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3026 9fc2131c-3d17-4b17-8129-050fedb90e07 15358 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 16 13:49:00.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3026 9fc2131c-3d17-4b17-8129-050fedb90e07 15435 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Apr 16 13:49:00.195: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3026 9fc2131c-3d17-4b17-8129-050fedb90e07 15435 0 2022-04-16 13:48:50 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-16 13:48:50 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] 
Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:49:10.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3026" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":42,"skipped":796,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:49:06.399: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:49:10.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2414" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1202,"failed":0} SSSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:49:10.466: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics Apr 16 13:49:50.601: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-3a12zq-control-plane-hxgrb is Running (Ready = true) Apr 16 13:49:50.714: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For 
namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: Apr 16 13:49:50.714: INFO: Deleting pod "simpletest.rc-28p9g" in namespace "gc-3096" Apr 16 13:49:50.722: INFO: Deleting pod "simpletest.rc-2cp4c" in namespace "gc-3096" Apr 16 13:49:50.735: INFO: Deleting pod "simpletest.rc-2zk27" in namespace "gc-3096" Apr 16 13:49:50.745: INFO: Deleting pod "simpletest.rc-44hcr" in namespace "gc-3096" Apr 16 13:49:50.752: INFO: Deleting pod "simpletest.rc-4lqsl" in namespace "gc-3096" Apr 16 13:49:50.762: INFO: Deleting pod "simpletest.rc-4rqw9" in namespace "gc-3096" Apr 16 13:49:50.772: INFO: Deleting pod "simpletest.rc-52j68" in namespace "gc-3096" Apr 16 13:49:50.786: INFO: Deleting pod "simpletest.rc-564d7" in namespace "gc-3096" Apr 16 13:49:50.797: INFO: Deleting pod "simpletest.rc-58hlg" in namespace "gc-3096" Apr 16 13:49:50.818: INFO: Deleting pod "simpletest.rc-62nzs" in namespace "gc-3096" Apr 16 13:49:50.840: INFO: Deleting pod "simpletest.rc-69c2d" in namespace "gc-3096" Apr 16 13:49:50.868: INFO: Deleting pod "simpletest.rc-6bhsc" in namespace "gc-3096" Apr 16 13:49:50.890: INFO: Deleting pod "simpletest.rc-6ctnr" in namespace "gc-3096" Apr 16 13:49:50.935: INFO: Deleting pod "simpletest.rc-6rtp5" in namespace "gc-3096" Apr 16 13:49:50.953: INFO: Deleting pod "simpletest.rc-6tdth" in namespace "gc-3096" Apr 16 13:49:50.969: INFO: Deleting pod "simpletest.rc-7ksjq" in namespace "gc-3096" Apr 16 13:49:51.030: INFO: Deleting pod "simpletest.rc-7sx8l" in namespace "gc-3096" Apr 16 13:49:51.043: INFO: Deleting pod "simpletest.rc-7tm76" in namespace "gc-3096" Apr 16 13:49:51.091: INFO: Deleting pod "simpletest.rc-88jlj" in namespace "gc-3096" Apr 16 13:49:51.109: INFO: Deleting pod "simpletest.rc-8jlnl" in namespace "gc-3096" Apr 16 
13:49:51.127: INFO: Deleting pod "simpletest.rc-8mj5g" in namespace "gc-3096" Apr 16 13:49:51.161: INFO: Deleting pod "simpletest.rc-9gfp8" in namespace "gc-3096" Apr 16 13:49:51.192: INFO: Deleting pod "simpletest.rc-9khvt" in namespace "gc-3096" Apr 16 13:49:51.241: INFO: Deleting pod "simpletest.rc-9rwqn" in namespace "gc-3096" Apr 16 13:49:51.281: INFO: Deleting pod "simpletest.rc-9rzst" in namespace "gc-3096" Apr 16 13:49:51.311: INFO: Deleting pod "simpletest.rc-9wbt9" in namespace "gc-3096" Apr 16 13:49:51.338: INFO: Deleting pod "simpletest.rc-bkqj2" in namespace "gc-3096" Apr 16 13:49:51.379: INFO: Deleting pod "simpletest.rc-bnfzg" in namespace "gc-3096" Apr 16 13:49:51.413: INFO: Deleting pod "simpletest.rc-ccrh2" in namespace "gc-3096" Apr 16 13:49:51.461: INFO: Deleting pod "simpletest.rc-d99xn" in namespace "gc-3096" Apr 16 13:49:51.494: INFO: Deleting pod "simpletest.rc-dbjn7" in namespace "gc-3096" Apr 16 13:49:51.511: INFO: Deleting pod "simpletest.rc-ddw8w" in namespace "gc-3096" Apr 16 13:49:51.534: INFO: Deleting pod "simpletest.rc-dnmk7" in namespace "gc-3096" Apr 16 13:49:51.553: INFO: Deleting pod "simpletest.rc-dp4gb" in namespace "gc-3096" Apr 16 13:49:51.581: INFO: Deleting pod "simpletest.rc-fgmrt" in namespace "gc-3096" Apr 16 13:49:51.607: INFO: Deleting pod "simpletest.rc-frktf" in namespace "gc-3096" Apr 16 13:49:51.630: INFO: Deleting pod "simpletest.rc-gb2l5" in namespace "gc-3096" Apr 16 13:49:51.653: INFO: Deleting pod "simpletest.rc-gczlk" in namespace "gc-3096" Apr 16 13:49:51.671: INFO: Deleting pod "simpletest.rc-gpdc8" in namespace "gc-3096" Apr 16 13:49:51.698: INFO: Deleting pod "simpletest.rc-h264n" in namespace "gc-3096" Apr 16 13:49:51.756: INFO: Deleting pod "simpletest.rc-hkms9" in namespace "gc-3096" Apr 16 13:49:51.836: INFO: Deleting pod "simpletest.rc-hkq99" in namespace "gc-3096" Apr 16 13:49:51.863: INFO: Deleting pod "simpletest.rc-hpk7s" in namespace "gc-3096" Apr 16 13:49:51.907: INFO: Deleting pod 
"simpletest.rc-hq99n" in namespace "gc-3096" Apr 16 13:49:51.951: INFO: Deleting pod "simpletest.rc-hrcfg" in namespace "gc-3096" Apr 16 13:49:52.012: INFO: Deleting pod "simpletest.rc-jfk6q" in namespace "gc-3096" Apr 16 13:49:52.047: INFO: Deleting pod "simpletest.rc-js5nt" in namespace "gc-3096" Apr 16 13:49:52.084: INFO: Deleting pod "simpletest.rc-jsvfj" in namespace "gc-3096" Apr 16 13:49:52.110: INFO: Deleting pod "simpletest.rc-kdxqd" in namespace "gc-3096" Apr 16 13:49:52.123: INFO: Deleting pod "simpletest.rc-kldbx" in namespace "gc-3096" Apr 16 13:49:52.163: INFO: Deleting pod "simpletest.rc-ks6md" in namespace "gc-3096" Apr 16 13:49:52.194: INFO: Deleting pod "simpletest.rc-ktx8g" in namespace "gc-3096" Apr 16 13:49:52.309: INFO: Deleting pod "simpletest.rc-l4cds" in namespace "gc-3096" Apr 16 13:49:52.424: INFO: Deleting pod "simpletest.rc-l4xnp" in namespace "gc-3096" Apr 16 13:49:52.465: INFO: Deleting pod "simpletest.rc-l6vx8" in namespace "gc-3096" Apr 16 13:49:52.546: INFO: Deleting pod "simpletest.rc-lhwmg" in namespace "gc-3096" Apr 16 13:49:52.579: INFO: Deleting pod "simpletest.rc-m7fsn" in namespace "gc-3096" Apr 16 13:49:52.611: INFO: Deleting pod "simpletest.rc-m85xc" in namespace "gc-3096" Apr 16 13:49:52.628: INFO: Deleting pod "simpletest.rc-mml65" in namespace "gc-3096" Apr 16 13:49:52.679: INFO: Deleting pod "simpletest.rc-nvv7n" in namespace "gc-3096" Apr 16 13:49:52.722: INFO: Deleting pod "simpletest.rc-nxg8l" in namespace "gc-3096" Apr 16 13:49:52.752: INFO: Deleting pod "simpletest.rc-p4pgc" in namespace "gc-3096" Apr 16 13:49:52.776: INFO: Deleting pod "simpletest.rc-p5nfr" in namespace "gc-3096" Apr 16 13:49:52.808: INFO: Deleting pod "simpletest.rc-p77zw" in namespace "gc-3096" Apr 16 13:49:52.836: INFO: Deleting pod "simpletest.rc-p9xpj" in namespace "gc-3096" Apr 16 13:49:52.852: INFO: Deleting pod "simpletest.rc-pjbrp" in namespace "gc-3096" Apr 16 13:49:52.887: INFO: Deleting pod "simpletest.rc-q4tn7" in namespace "gc-3096" 
Apr 16 13:49:52.921: INFO: Deleting pod "simpletest.rc-qcfbb" in namespace "gc-3096" Apr 16 13:49:52.937: INFO: Deleting pod "simpletest.rc-qt6d7" in namespace "gc-3096" Apr 16 13:49:52.974: INFO: Deleting pod "simpletest.rc-r759g" in namespace "gc-3096" Apr 16 13:49:52.992: INFO: Deleting pod "simpletest.rc-rwvx2" in namespace "gc-3096" Apr 16 13:49:53.007: INFO: Deleting pod "simpletest.rc-s4d5w" in namespace "gc-3096" Apr 16 13:49:53.021: INFO: Deleting pod "simpletest.rc-shh22" in namespace "gc-3096" Apr 16 13:49:53.053: INFO: Deleting pod "simpletest.rc-skmmf" in namespace "gc-3096" Apr 16 13:49:53.081: INFO: Deleting pod "simpletest.rc-sp2kf" in namespace "gc-3096" Apr 16 13:49:53.099: INFO: Deleting pod "simpletest.rc-t64kt" in namespace "gc-3096" Apr 16 13:49:53.138: INFO: Deleting pod "simpletest.rc-t64p5" in namespace "gc-3096" Apr 16 13:49:53.185: INFO: Deleting pod "simpletest.rc-t8bzr" in namespace "gc-3096" Apr 16 13:49:53.199: INFO: Deleting pod "simpletest.rc-tqhhn" in namespace "gc-3096" Apr 16 13:49:53.252: INFO: Deleting pod "simpletest.rc-tr56s" in namespace "gc-3096" Apr 16 13:49:53.272: INFO: Deleting pod "simpletest.rc-vb4sx" in namespace "gc-3096" Apr 16 13:49:53.285: INFO: Deleting pod "simpletest.rc-vjt9h" in namespace "gc-3096" Apr 16 13:49:53.314: INFO: Deleting pod "simpletest.rc-vwkcx" in namespace "gc-3096" Apr 16 13:49:53.342: INFO: Deleting pod "simpletest.rc-vwt2m" in namespace "gc-3096" Apr 16 13:49:53.382: INFO: Deleting pod "simpletest.rc-vxlb5" in namespace "gc-3096" Apr 16 13:49:53.417: INFO: Deleting pod "simpletest.rc-vz6ts" in namespace "gc-3096" Apr 16 13:49:53.429: INFO: Deleting pod "simpletest.rc-w5wrs" in namespace "gc-3096" Apr 16 13:49:53.451: INFO: Deleting pod "simpletest.rc-wckjs" in namespace "gc-3096" Apr 16 13:49:53.477: INFO: Deleting pod "simpletest.rc-wd7mr" in namespace "gc-3096" Apr 16 13:49:53.490: INFO: Deleting pod "simpletest.rc-wgvc5" in namespace "gc-3096" Apr 16 13:49:53.513: INFO: Deleting pod 
"simpletest.rc-xbzgd" in namespace "gc-3096" Apr 16 13:49:53.539: INFO: Deleting pod "simpletest.rc-xk688" in namespace "gc-3096" Apr 16 13:49:53.571: INFO: Deleting pod "simpletest.rc-xq4kc" in namespace "gc-3096" Apr 16 13:49:53.624: INFO: Deleting pod "simpletest.rc-xtcn6" in namespace "gc-3096" Apr 16 13:49:53.639: INFO: Deleting pod "simpletest.rc-xz8k5" in namespace "gc-3096" Apr 16 13:49:53.658: INFO: Deleting pod "simpletest.rc-zf57h" in namespace "gc-3096" Apr 16 13:49:53.679: INFO: Deleting pod "simpletest.rc-zgds2" in namespace "gc-3096" Apr 16 13:49:53.717: INFO: Deleting pod "simpletest.rc-zhjrd" in namespace "gc-3096" Apr 16 13:49:53.742: INFO: Deleting pod "simpletest.rc-zxt8g" in namespace "gc-3096" Apr 16 13:49:53.759: INFO: Deleting pod "simpletest.rc-zzwf5" in namespace "gc-3096" [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:49:53.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-3096" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":-1,"completed":57,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:49:53.878: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 STEP: creating the pdb STEP: Waiting for the pdb to be processed STEP: updating the pdb STEP: Waiting for the pdb to be processed STEP: patching the pdb STEP: Waiting for the pdb to be processed STEP: Waiting for the pdb to be deleted [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:49:56.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "disruption-9775" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":58,"skipped":1234,"failed":0} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 16 13:49:56.276: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 13:49:56.381: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 16 13:49:58.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6375 --namespace=crd-publish-openapi-6375 create -f -' Apr 16 13:49:59.457: INFO: stderr: "" Apr 16 13:49:59.457: INFO: stdout: "e2e-test-crd-publish-openapi-9337-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 16 13:49:59.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6375 --namespace=crd-publish-openapi-6375 delete e2e-test-crd-publish-openapi-9337-crds test-cr' Apr 16 13:49:59.544: INFO: stderr: "" Apr 16 13:49:59.544: INFO: stdout: "e2e-test-crd-publish-openapi-9337-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 16 13:49:59.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=crd-publish-openapi-6375 --namespace=crd-publish-openapi-6375 apply -f -' Apr 16 13:49:59.730: INFO: stderr: "" Apr 16 13:49:59.731: INFO: stdout: "e2e-test-crd-publish-openapi-9337-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 16 13:49:59.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6375 --namespace=crd-publish-openapi-6375 delete e2e-test-crd-publish-openapi-9337-crds test-cr' Apr 16 13:49:59.814: INFO: stderr: "" Apr 16 13:49:59.814: INFO: stdout: "e2e-test-crd-publish-openapi-9337-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Apr 16 13:49:59.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-6375 explain e2e-test-crd-publish-openapi-9337-crds' Apr 16 13:50:00.007: INFO: stderr: "" Apr 16 13:50:00.007: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-9337-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:02.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-6375" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:49:10.232: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: creating service in namespace services-8860 Apr 16 13:49:10.271: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:12.281: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:14.281: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:16.288: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:18.287: INFO: The status of Pod kube-proxy-mode-detector 
is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:20.327: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:22.276: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:49:24.275: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Apr 16 13:49:24.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Apr 16 13:49:24.691: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Apr 16 13:49:24.691: INFO: stdout: "iptables" Apr 16 13:49:24.691: INFO: proxyMode: iptables Apr 16 13:49:24.699: INFO: Waiting for pod kube-proxy-mode-detector to disappear Apr 16 13:49:24.703: INFO: Pod kube-proxy-mode-detector no longer exists �[1mSTEP�[0m: creating service affinity-nodeport-timeout in namespace services-8860 �[1mSTEP�[0m: creating replication controller affinity-nodeport-timeout in namespace services-8860 I0416 13:49:24.739171 17 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-8860, replica count: 3 I0416 13:49:27.791242 17 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 0 running, 3 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 13:49:30.792351 17 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 1 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0416 13:49:33.793404 17 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 16 13:49:33.807: INFO: Creating new exec pod Apr 16 13:49:38.851: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Apr 16 13:49:39.100: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Apr 16 13:49:39.100: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 16 13:49:39.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.143.11.242 80' Apr 16 13:49:39.320: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.143.11.242 80\nConnection to 10.143.11.242 80 port [tcp/http] succeeded!\n" Apr 16 13:49:39.320: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 16 13:49:39.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 32123' Apr 16 13:49:39.508: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 32123\nConnection to 172.18.0.6 32123 port [tcp/*] succeeded!\n" Apr 16 13:49:39.508: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 16 13:49:39.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 32123' Apr 16 13:49:39.657: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 32123\nConnection to 172.18.0.5 32123 port [tcp/*] succeeded!\n" Apr 16 13:49:39.657: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 16 
13:49:39.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:32123/ ; done' Apr 16 13:49:39.962: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n" Apr 16 13:49:39.962: INFO: stdout: "\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs\naffinity-nodeport-timeout-64rcs" Apr 16 13:49:39.962: INFO: Received response from host: 
affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Received response from host: affinity-nodeport-timeout-64rcs Apr 16 13:49:39.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:32123/' Apr 16 13:49:40.119: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.4:32123/\n" Apr 16 13:49:40.119: INFO: stdout: "affinity-nodeport-timeout-64rcs" Apr 16 13:50:00.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-8860 exec execpod-affinityc5gxn -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:32123/' Apr 16 13:50:00.309: INFO: stderr: "+ curl -q -s 
--connect-timeout 2 http://172.18.0.4:32123/\n" Apr 16 13:50:00.309: INFO: stdout: "affinity-nodeport-timeout-xtsqv" Apr 16 13:50:00.309: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-timeout in namespace services-8860, will wait for the garbage collector to delete the pods Apr 16 13:50:00.378: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 5.962241ms Apr 16 13:50:00.479: INFO: Terminating ReplicationController affinity-nodeport-timeout pods took: 100.886399ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:02.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-8860" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":812,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:02.865: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-5244d051-c1b7-4234-8fe5-938b1d3d5012 �[1mSTEP�[0m: Creating a pod to test consume configMaps Apr 16 13:50:02.914: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026" in namespace "projected-5206" to be "Succeeded or Failed" Apr 16 13:50:02.917: INFO: Pod "pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026": Phase="Pending", Reason="", readiness=false. Elapsed: 3.437014ms Apr 16 13:50:04.923: INFO: Pod "pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009161293s �[1mSTEP�[0m: Saw pod success Apr 16 13:50:04.923: INFO: Pod "pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026" satisfied condition "Succeeded or Failed" Apr 16 13:50:04.926: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Apr 16 13:50:04.945: INFO: Waiting for pod pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026 to disappear Apr 16 13:50:04.947: INFO: Pod pod-projected-configmaps-9ba2c33c-8461-41ad-a4c7-98ba6ab40026 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:04.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-5206" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":839,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:05.038: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Apr 16 13:50:05.078: INFO: Waiting up to 5m0s for pod "pod-bcf56846-7083-4f2a-b41c-7401608e0359" in namespace "emptydir-7883" to be "Succeeded or Failed" Apr 16 13:50:05.082: INFO: Pod "pod-bcf56846-7083-4f2a-b41c-7401608e0359": Phase="Pending", Reason="", readiness=false. Elapsed: 3.697782ms Apr 16 13:50:07.087: INFO: Pod "pod-bcf56846-7083-4f2a-b41c-7401608e0359": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 2.008434532s �[1mSTEP�[0m: Saw pod success Apr 16 13:50:07.087: INFO: Pod "pod-bcf56846-7083-4f2a-b41c-7401608e0359" satisfied condition "Succeeded or Failed" Apr 16 13:50:07.089: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-bcf56846-7083-4f2a-b41c-7401608e0359 container test-container: <nil> �[1mSTEP�[0m: delete the pod Apr 16 13:50:07.104: INFO: Waiting for pod pod-bcf56846-7083-4f2a-b41c-7401608e0359 to disappear Apr 16 13:50:07.106: INFO: Pod pod-bcf56846-7083-4f2a-b41c-7401608e0359 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:07.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-7883" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":887,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":59,"skipped":1241,"failed":0} [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:02.301: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. Apr 16 13:50:02.345: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:50:04.349: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: create the pod with lifecycle hook Apr 16 13:50:04.358: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true) Apr 16 13:50:06.362: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true) �[1mSTEP�[0m: check poststart hook �[1mSTEP�[0m: delete the pod with lifecycle hook Apr 16 13:50:06.382: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 13:50:06.386: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 13:50:08.386: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 13:50:08.390: INFO: Pod pod-with-poststart-http-hook still exists Apr 16 13:50:10.387: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 16 13:50:10.390: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:10.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-1045" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1241,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:07.134: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 16 13:50:07.688: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 16 13:50:10.709: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API �[1mSTEP�[0m: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the 
AdmissionRegistration API �[1mSTEP�[0m: Creating a dummy validating-webhook-configuration object �[1mSTEP�[0m: Deleting the validating-webhook-configuration, which should be possible to remove �[1mSTEP�[0m: Creating a dummy mutating-webhook-configuration object �[1mSTEP�[0m: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:10.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5424" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5424-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":46,"skipped":898,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] DisruptionController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:10.960: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:10.990: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename disruption-2 �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: listing a collection of PDBs across all namespaces �[1mSTEP�[0m: listing a collection of PDBs in namespace disruption-8725 �[1mSTEP�[0m: deleting a collection of PDBs �[1mSTEP�[0m: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:17.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-2-42" for this suite. 
[AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:17.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-8725" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":47,"skipped":964,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:10.405: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: set up a multi version CRD Apr 16 13:50:10.434: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: rename a version �[1mSTEP�[0m: check the new version name is served �[1mSTEP�[0m: check the old version name is removed �[1mSTEP�[0m: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:25.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-9334" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":61,"skipped":1244,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:25.084: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 Apr 16 13:50:25.114: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 16 13:50:26.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-2512" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":62,"skipped":1277,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 16 13:50:26.186: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename certificates �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630 �[1mSTEP�[0m: getting /apis �[1mSTEP�[0m: getting /apis/certificates.k8s.io �[1mSTEP�[0m: getting /apis/certificates.k8s.io/v1 �[1mSTEP�[0m: creating �[1mSTEP�[0m: getting �[1mSTEP�[0m: listing �[1mSTEP�[0m: watching Apr 16 13:50:26.697: INFO: starting watch �[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 16 13:50:26.713: INFO: waiting for watch events with expected annotations Apr 16 13:50:26.714: INFO: saw patched and updated annotations �[1mSTEP�[0m: getting /approval �[1mSTEP�[0m: patching /approval �[1mSTEP�[0m: updating /approval �[1mSTEP�[0m: getting /status �[1mSTEP�[0m: patching /status �[1mSTEP�[0m: updating /status �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:26.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "certificates-7715" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":63,"skipped":1306,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:26.788: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 16 13:50:26.829: INFO: Waiting up to 5m0s for pod "pod-a736ec83-329a-4f08-aebe-b525583c16bf" in namespace "emptydir-4247" to be "Succeeded or Failed"
Apr 16 13:50:26.832: INFO: Pod "pod-a736ec83-329a-4f08-aebe-b525583c16bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.879014ms
Apr 16 13:50:28.839: INFO: Pod "pod-a736ec83-329a-4f08-aebe-b525583c16bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010075671s
STEP: Saw pod success
Apr 16 13:50:28.839: INFO: Pod "pod-a736ec83-329a-4f08-aebe-b525583c16bf" satisfied condition "Succeeded or Failed"
Apr 16 13:50:28.842: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-a736ec83-329a-4f08-aebe-b525583c16bf container test-container: <nil>
STEP: delete the pod
Apr 16 13:50:28.854: INFO: Waiting for pod pod-a736ec83-329a-4f08-aebe-b525583c16bf to disappear
Apr 16 13:50:28.856: INFO: Pod pod-a736ec83-329a-4f08-aebe-b525583c16bf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:28.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4247" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1320,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:17.099: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: referencing a single matching pod
STEP: referencing matching pods with named port
STEP: creating empty Endpoints and EndpointSlices for no matching Pods
STEP: recreating EndpointSlices after they've been deleted
Apr 16 13:50:37.269: INFO: EndpointSlice for Service endpointslice-4326/example-named-port not found
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:47.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-4326" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":-1,"completed":48,"skipped":971,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:47.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 16 13:50:47.345: INFO: Waiting up to 5m0s for pod "pod-c05d0a4f-1fa8-4fd7-8585-338738071518" in namespace "emptydir-7553" to be "Succeeded or Failed"
Apr 16 13:50:47.348: INFO: Pod "pod-c05d0a4f-1fa8-4fd7-8585-338738071518": Phase="Pending", Reason="", readiness=false. Elapsed: 3.203917ms
Apr 16 13:50:49.352: INFO: Pod "pod-c05d0a4f-1fa8-4fd7-8585-338738071518": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007558398s
STEP: Saw pod success
Apr 16 13:50:49.352: INFO: Pod "pod-c05d0a4f-1fa8-4fd7-8585-338738071518" satisfied condition "Succeeded or Failed"
Apr 16 13:50:49.356: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-jbucf3 pod pod-c05d0a4f-1fa8-4fd7-8585-338738071518 container test-container: <nil>
STEP: delete the pod
Apr 16 13:50:49.372: INFO: Waiting for pod pod-c05d0a4f-1fa8-4fd7-8585-338738071518 to disappear
Apr 16 13:50:49.375: INFO: Pod pod-c05d0a4f-1fa8-4fd7-8585-338738071518 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:49.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7553" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":979,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:49.392: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating secret with name secret-test-1ad788dd-b109-4e9e-9126-f5e568053db4
STEP: Creating a pod to test consume secrets
Apr 16 13:50:49.455: INFO: Waiting up to 5m0s for pod "pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14" in namespace "secrets-2159" to be "Succeeded or Failed"
Apr 16 13:50:49.458: INFO: Pod "pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.975351ms
Apr 16 13:50:51.463: INFO: Pod "pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007830992s
STEP: Saw pod success
Apr 16 13:50:51.463: INFO: Pod "pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14" satisfied condition "Succeeded or Failed"
Apr 16 13:50:51.467: INFO: Trying to get logs from node k8s-upgrade-and-conformance-3a12zq-worker-e11j1x pod pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14 container secret-volume-test: <nil>
STEP: delete the pod
Apr 16 13:50:51.481: INFO: Waiting for pod pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14 to disappear
Apr 16 13:50:51.483: INFO: Pod pod-secrets-76630168-8f28-41dd-ab58-1552fa8f8d14 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:51.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2159" for this suite.
STEP: Destroying namespace "secret-namespace-3414" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":983,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:51.527: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Apr 16 13:50:51.564: INFO: Created pod &Pod{ObjectMeta:{test-dns-nameservers dns-2377 8e6b8037-dc32-4199-a98e-25a02b4a9687 17992 0 2022-04-16 13:50:51 +0000 UTC <nil> <nil> map[] map[] [] [] [{e2e.test Update v1 2022-04-16 13:50:51 +0000 UTC FieldsV1 {"f:spec":{"f:containers":{"k:{\"name\":\"agnhost-container\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsConfig":{".":{},"f:nameservers":{},"f:searches":{}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-p5bbf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost-container,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p5bbf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Apr 16 13:50:51.567: INFO: The status of Pod test-dns-nameservers is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:50:53.571: INFO: The status of Pod test-dns-nameservers is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Apr 16 13:50:53.571: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-2377 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:50:53.571: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:50:53.572: INFO: ExecWithOptions: Clientset creation
Apr 16 13:50:53.572: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-2377/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-suffix&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
STEP: Verifying customized DNS server is configured on pod...
Apr 16 13:50:53.670: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-2377 PodName:test-dns-nameservers ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:50:53.670: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:50:53.671: INFO: ExecWithOptions: Clientset creation
Apr 16 13:50:53.671: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/dns-2377/pods/test-dns-nameservers/exec?command=%2Fagnhost&command=dns-server-list&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 16 13:50:53.777: INFO: Deleting pod test-dns-nameservers...
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:53.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2377" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":-1,"completed":51,"skipped":1007,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:53.809: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:50:53.839: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 16 13:50:55.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3009 --namespace=crd-publish-openapi-3009 create -f -'
Apr 16 13:50:56.761: INFO: stderr: ""
Apr 16 13:50:56.761: INFO: stdout: "e2e-test-crd-publish-openapi-795-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 16 13:50:56.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3009 --namespace=crd-publish-openapi-3009 delete e2e-test-crd-publish-openapi-795-crds test-cr'
Apr 16 13:50:56.850: INFO: stderr: ""
Apr 16 13:50:56.850: INFO: stdout: "e2e-test-crd-publish-openapi-795-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 16 13:50:56.850: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3009 --namespace=crd-publish-openapi-3009 apply -f -'
Apr 16 13:50:57.060: INFO: stderr: ""
Apr 16 13:50:57.060: INFO: stdout: "e2e-test-crd-publish-openapi-795-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 16 13:50:57.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3009 --namespace=crd-publish-openapi-3009 delete e2e-test-crd-publish-openapi-795-crds test-cr'
Apr 16 13:50:57.140: INFO: stderr: ""
Apr 16 13:50:57.140: INFO: stdout: "e2e-test-crd-publish-openapi-795-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 16 13:50:57.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3009 explain e2e-test-crd-publish-openapi-795-crds'
Apr 16 13:50:57.341: INFO: stderr: ""
Apr 16 13:50:57.341: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-795-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:50:59.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3009" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":52,"skipped":1019,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:50:59.601: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 16 13:51:00.199: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 16 13:51:02.210: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 16, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 51, 0, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 16, 13, 51, 0, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 16, 13, 51, 0, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 16 13:51:05.228: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:51:05.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6850" for this suite.
STEP: Destroying namespace "webhook-6850-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":53,"skipped":1033,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:51:05.357: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
Apr 16 13:51:05.428: INFO: The status of Pod busybox-scheduling-7be71f1e-e493-4d80-b75f-2d8b2282706c is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:51:07.433: INFO: The status of Pod busybox-scheduling-7be71f1e-e493-4d80-b75f-2d8b2282706c is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 16 13:51:07.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5562" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1049,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 16 13:46:14.346: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
STEP: Performing setup for networking test in namespace pod-network-test-3131
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 16 13:46:14.372: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 16 13:46:14.411: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 16 13:46:16.416: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:18.417: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:20.415: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:22.416: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:24.414: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:26.415: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:28.414: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:30.415: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:32.415: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 16 13:46:34.415: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 16 13:46:34.426: INFO: The status of Pod netserver-1 is Running (Ready = true)
Apr 16 13:46:34.431: INFO: The status of Pod netserver-2 is Running (Ready = true)
Apr 16 13:46:34.437: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Apr 16 13:46:36.454: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Apr 16 13:46:36.455: INFO: Breadth first check of 192.168.0.32 on host 172.18.0.4...
Apr 16 13:46:36.458: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.65:9080/dial?request=hostname&protocol=http&host=192.168.0.32&port=8083&tries=1'] Namespace:pod-network-test-3131 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:46:36.458: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:46:36.459: INFO: ExecWithOptions: Clientset creation
Apr 16 13:46:36.459: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3131/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.65%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.32%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 16 13:46:36.564: INFO: Waiting for responses: map[]
Apr 16 13:46:36.564: INFO: reached 192.168.0.32 after 0/1 tries
Apr 16 13:46:36.564: INFO: Breadth first check of 192.168.2.64 on host 172.18.0.7...
Apr 16 13:46:36.567: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.65:9080/dial?request=hostname&protocol=http&host=192.168.2.64&port=8083&tries=1'] Namespace:pod-network-test-3131 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:46:36.567: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:46:36.568: INFO: ExecWithOptions: Clientset creation
Apr 16 13:46:36.568: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3131/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.65%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.64%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 16 13:46:36.668: INFO: Waiting for responses: map[]
Apr 16 13:46:36.668: INFO: reached 192.168.2.64 after 0/1 tries
Apr 16 13:46:36.668: INFO: Breadth first check of 192.168.3.24 on host 172.18.0.6...
Apr 16 13:46:36.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.65:9080/dial?request=hostname&protocol=http&host=192.168.3.24&port=8083&tries=1'] Namespace:pod-network-test-3131 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 16 13:46:36.671: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 16 13:46:36.672: INFO: ExecWithOptions: Clientset creation
Apr 16 13:46:36.672: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-3131/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.65%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.3.24%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 16 13:46:41.745: INFO: Waiting for responses: map[netserver-2:{}]
Apr 16 13:46:43.745: INFO: Output of kubectl describe pod pod-network-test-3131/netserver-0:
Apr 16 13:46:43.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-3131 describe pod netserver-0 --namespace=pod-network-test-3131'
Apr 16 13:46:43.829: INFO: stderr: ""
Apr 16 13:46:43.829: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-3131\nPriority: 0\nNode: k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x/172.18.0.4\nStart Time: Sat, 16 Apr 2022 13:46:14 +0000\nLabels: selector-50f1befb-16fc-46a6-b424-3e90e28b2d0c=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.32\nIPs:\n IP: 192.168.0.32\nContainers:\n webserver:\n Container ID: containerd://12415ad188526277e635bee1b9949438deb96d549e7c20f32839090ee771c41a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Sat, 16 Apr 2022 13:46:15 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4kscg (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-4kscg:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-3a12zq-md-0-64689c6cd-pdn9x\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 29s default-scheduler Successfully assigned pod-network-test-3131/netserver-0 to